Test Report: Hyper-V_Windows 18431

80dc9090142297b85dde7abc1e10c47a59582e12:2024-03-18:33628

Failed tests (14/206)

TestAddons/parallel/Registry (75.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 24.4501ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-plfjv" [a64e15ae-07d9-44a1-9c6c-b119905c56b8] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0175437s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-htczk" [fb4c18c7-fe5e-4fbf-ad1c-582c178d397e] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0214145s
addons_test.go:340: (dbg) Run:  kubectl --context addons-748800 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-748800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-748800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.2279189s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-748800 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-748800 ip: (2.7485982s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0318 10:42:39.549576    1072 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-748800 ip"
2024/03/18 10:42:42 [DEBUG] GET http://172.25.150.46:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-748800 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-748800 addons disable registry --alsologtostderr -v=1: (16.5286324s)
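Everything functional in this run passed: the registry pods went healthy, the in-cluster wget against http://registry.kube-system.svc.cluster.local succeeded, and the addon was disabled cleanly. The failure is the strict assertion at addons_test.go:364, which requires "minikube ip" to write nothing at all to stderr; the Docker CLI's warning about the unresolvable "default" context (a missing meta.json under .docker\contexts) is enough to trip it. A minimal sketch of that assertion pattern, with illustrative helper names rather than the exact addons_test.go code:

    // Minimal sketch (in a _test.go file) of the empty-stderr assertion
    // pattern that fails here; helper names are illustrative, not the
    // exact addons_test.go code.
    package addons

    import (
    	"os/exec"
    	"strings"
    	"testing"
    )

    // runIP shells out to the binary under test and captures stdout and
    // stderr separately, the way the harness does.
    func runIP(t *testing.T, profile string) (stdout, stderr string) {
    	t.Helper()
    	cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", profile, "ip")
    	var out, errBuf strings.Builder
    	cmd.Stdout = &out
    	cmd.Stderr = &errBuf
    	if err := cmd.Run(); err != nil {
    		t.Fatalf("minikube ip: %v", err)
    	}
    	return out.String(), errBuf.String()
    }

    func checkIP(t *testing.T, profile string) string {
    	stdout, stderr := runIP(t, profile)
    	// Any stderr output at all fails the test, so a harmless warning
    	// from the Docker CLI libraries is enough, even though the command
    	// exited 0 and printed the right address.
    	if stderr != "" {
    		t.Errorf("expected stderr to be -empty- but got: %q", stderr)
    	}
    	return strings.TrimSpace(stdout)
    }

Under this pattern any stray warning from a linked library fails the test even when the command's stdout and exit code are correct, which is exactly what happened here.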
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-748800 -n addons-748800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-748800 -n addons-748800: (13.5836791s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-748800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-748800 logs -n 25: (10.378916s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-068600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:34 UTC |                     |
	|         | -p download-only-068600                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:34 UTC | 18 Mar 24 10:34 UTC |
	| delete  | -p download-only-068600                                                                     | download-only-068600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:34 UTC | 18 Mar 24 10:35 UTC |
	| start   | -o=json --download-only                                                                     | download-only-369300 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC |                     |
	|         | -p download-only-369300                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC | 18 Mar 24 10:35 UTC |
	| delete  | -p download-only-369300                                                                     | download-only-369300 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC | 18 Mar 24 10:35 UTC |
	| start   | -o=json --download-only                                                                     | download-only-324500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC |                     |
	|         | -p download-only-324500                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC | 18 Mar 24 10:35 UTC |
	| delete  | -p download-only-324500                                                                     | download-only-324500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC | 18 Mar 24 10:35 UTC |
	| delete  | -p download-only-068600                                                                     | download-only-068600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC | 18 Mar 24 10:35 UTC |
	| delete  | -p download-only-369300                                                                     | download-only-369300 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC | 18 Mar 24 10:35 UTC |
	| delete  | -p download-only-324500                                                                     | download-only-324500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC | 18 Mar 24 10:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-301800 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC |                     |
	|         | binary-mirror-301800                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:53180                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-301800                                                                     | binary-mirror-301800 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC | 18 Mar 24 10:35 UTC |
	| addons  | enable dashboard -p                                                                         | addons-748800        | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC |                     |
	|         | addons-748800                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-748800        | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC |                     |
	|         | addons-748800                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-748800 --wait=true                                                                | addons-748800        | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC | 18 Mar 24 10:42 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-748800        | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:42 UTC | 18 Mar 24 10:42 UTC |
	|         | -p addons-748800                                                                            |                      |                   |         |                     |                     |
	| ssh     | addons-748800 ssh cat                                                                       | addons-748800        | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:42 UTC | 18 Mar 24 10:42 UTC |
	|         | /opt/local-path-provisioner/pvc-78236935-41c5-4ffc-887f-2d2dd3c5a049_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-748800 ip                                                                            | addons-748800        | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:42 UTC | 18 Mar 24 10:42 UTC |
	| addons  | addons-748800 addons disable                                                                | addons-748800        | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:42 UTC | 18 Mar 24 10:42 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-748800 addons disable                                                                | addons-748800        | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:42 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-748800 addons                                                                        | addons-748800        | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:42 UTC | 18 Mar 24 10:43 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 10:35:42
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
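The header declared above is the standard klog layout: a severity letter (I, W, E, or F), an mmdd date, a microsecond timestamp, the thread id, the source file and line, and the message. A throwaway Go parser for that shape, assuming lines exactly like the ones that follow (not code from minikube itself):

    // Toy parser for the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    // header format declared above; illustrative only.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    var klogLine = regexp.MustCompile(
    	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
    	line := "I0318 10:35:42.631242   13044 out.go:291] Setting OutFile to fd 892 ..."
    	m := klogLine.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("not a klog line")
    		return
    	}
    	// m[1]=severity, m[2]=mmdd, m[3]=time, m[4]=threadid, m[5]=file:line, m[6]=msg
    	fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6])
    }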
	I0318 10:35:42.631242   13044 out.go:291] Setting OutFile to fd 892 ...
	I0318 10:35:42.632237   13044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 10:35:42.632237   13044 out.go:304] Setting ErrFile to fd 792...
	I0318 10:35:42.632237   13044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 10:35:42.654187   13044 out.go:298] Setting JSON to false
	I0318 10:35:42.657571   13044 start.go:129] hostinfo: {"hostname":"minikube6","uptime":133467,"bootTime":1710624675,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0318 10:35:42.657571   13044 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 10:35:42.663590   13044 out.go:177] * [addons-748800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0318 10:35:42.667601   13044 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 10:35:42.667601   13044 notify.go:220] Checking for updates...
	I0318 10:35:42.670155   13044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 10:35:42.672899   13044 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0318 10:35:42.677450   13044 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 10:35:42.679421   13044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 10:35:42.682422   13044 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 10:35:48.283159   13044 out.go:177] * Using the hyperv driver based on user configuration
	I0318 10:35:48.286948   13044 start.go:297] selected driver: hyperv
	I0318 10:35:48.286948   13044 start.go:901] validating driver "hyperv" against <nil>
	I0318 10:35:48.286948   13044 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 10:35:48.337491   13044 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 10:35:48.338705   13044 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 10:35:48.339496   13044 cni.go:84] Creating CNI manager for ""
	I0318 10:35:48.339496   13044 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 10:35:48.339496   13044 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 10:35:48.339496   13044 start.go:340] cluster config:
	{Name:addons-748800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-748800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 10:35:48.339496   13044 iso.go:125] acquiring lock: {Name:mk859ea173f7c19f70b69d7017f4a5a661cd1500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 10:35:48.343520   13044 out.go:177] * Starting "addons-748800" primary control-plane node in "addons-748800" cluster
	I0318 10:35:48.346623   13044 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 10:35:48.346623   13044 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 10:35:48.347157   13044 cache.go:56] Caching tarball of preloaded images
	I0318 10:35:48.347157   13044 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 10:35:48.347157   13044 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 10:35:48.347828   13044 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\config.json ...
	I0318 10:35:48.348448   13044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\config.json: {Name:mk0c1bc4a8868272321c580f84e829c072a4d890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:35:48.349657   13044 start.go:360] acquireMachinesLock for addons-748800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 10:35:48.349657   13044 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-748800"
	I0318 10:35:48.349657   13044 start.go:93] Provisioning new machine with config: &{Name:addons-748800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-748800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 10:35:48.350289   13044 start.go:125] createHost starting for "" (driver="hyperv")
	I0318 10:35:48.352893   13044 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 10:35:48.353742   13044 start.go:159] libmachine.API.Create for "addons-748800" (driver="hyperv")
	I0318 10:35:48.353742   13044 client.go:168] LocalClient.Create starting
	I0318 10:35:48.354619   13044 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0318 10:35:48.599724   13044 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0318 10:35:48.985311   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 10:35:51.229201   13044 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 10:35:51.229201   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:35:51.229584   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 10:35:53.040217   13044 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 10:35:53.041020   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:35:53.041123   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 10:35:54.564189   13044 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 10:35:54.564189   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:35:54.564189   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 10:35:58.391856   13044 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 10:35:58.391856   13044 main.go:141] libmachine: [stderr =====>] : 
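Every Hyper-V step in this log is the same round trip: libmachine shells out to powershell.exe with -NoProfile -NonInteractive and records both streams, which is what the paired [stdout =====>] and [stderr =====>] lines show. A minimal sketch of that pattern, with an illustrative helper name:

    // Sketch of the PowerShell round trip behind the [executing ==>] /
    // [stdout =====>] / [stderr =====>] triples above; illustrative only,
    // not the hyperv driver's actual implementation.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // psRun invokes PowerShell non-interactively and returns both streams
    // so the caller can log them separately.
    func psRun(command string) (string, string, error) {
    	cmd := exec.Command(
    		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
    		"-NoProfile", "-NonInteractive", command)
    	var stdout, stderr bytes.Buffer
    	cmd.Stdout = &stdout
    	cmd.Stderr = &stderr
    	err := cmd.Run()
    	return stdout.String(), stderr.String(), err
    }

    func main() {
    	out, errOut, err := psRun(`@(Get-Module -ListAvailable hyper-v).Name | Get-Unique`)
    	fmt.Printf("[stdout =====>] : %s\n", out)
    	fmt.Printf("[stderr =====>] : %s\n", errOut)
    	if err != nil {
    		fmt.Println("exec error:", err)
    	}
    }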
	I0318 10:35:58.394171   13044 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 10:35:58.840978   13044 main.go:141] libmachine: Creating SSH key...
	I0318 10:35:58.936596   13044 main.go:141] libmachine: Creating VM...
	I0318 10:35:58.937581   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 10:36:01.835624   13044 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 10:36:01.835812   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:01.835883   13044 main.go:141] libmachine: Using switch "Default Switch"
	I0318 10:36:01.835967   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 10:36:03.650854   13044 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 10:36:03.650854   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:03.650854   13044 main.go:141] libmachine: Creating VHD
	I0318 10:36:03.651749   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 10:36:07.485740   13044 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 87EE33EB-232C-4266-80BA-1F7ADB714B67
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 10:36:07.485740   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:07.485740   13044 main.go:141] libmachine: Writing magic tar header
	I0318 10:36:07.486657   13044 main.go:141] libmachine: Writing SSH key tar header
	I0318 10:36:07.495783   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 10:36:10.725312   13044 main.go:141] libmachine: [stdout =====>] : 
	I0318 10:36:10.725312   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:10.725312   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\disk.vhd' -SizeBytes 20000MB
	I0318 10:36:13.310001   13044 main.go:141] libmachine: [stdout =====>] : 
	I0318 10:36:13.310001   13044 main.go:141] libmachine: [stderr =====>] : 
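The commands above are the driver's disk-preparation trick: create a small fixed VHD, whose on-disk format is raw bytes followed by a 512-byte footer, write a tar archive carrying the SSH key at offset 0 of the raw area (the "magic tar header", which the boot2docker guest is expected to detect and unpack on first boot), then convert the image to a dynamic VHD and resize it to the requested 20000MB. A rough sketch of the tar-writing step; the paths and the archive entry name are placeholders, not the driver's actual layout:

    // Rough sketch of the "magic tar header" write; paths and the entry
    // name are placeholders, not the hyperv driver's actual layout.
    package main

    import (
    	"archive/tar"
    	"log"
    	"os"
    )

    // writeKeyTar overwrites only the leading bytes of the fixed VHD's raw
    // data area with a small tar archive; opening without O_TRUNC leaves
    // the 512-byte VHD footer at the end of the file intact.
    func writeKeyTar(vhdPath string, pubKey []byte) error {
    	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	tw := tar.NewWriter(f)
    	defer tw.Close() // flushes padding and the end-of-archive marker
    	hdr := &tar.Header{
    		Name: ".ssh/authorized_keys", // placeholder entry name
    		Mode: 0644,
    		Size: int64(len(pubKey)),
    	}
    	if err := tw.WriteHeader(hdr); err != nil {
    		return err
    	}
    	_, err = tw.Write(pubKey)
    	return err
    }

    func main() {
    	key, err := os.ReadFile(`C:\placeholder\id_rsa.pub`)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := writeKeyTar(`C:\placeholder\fixed.vhd`, key); err != nil {
    		log.Fatal(err)
    	}
    }

Creating the image as -Fixed first is what makes the offset-0 write land in the guest-visible data region; only afterwards is it converted to a dynamic VHD and grown.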
	I0318 10:36:13.310391   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-748800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0318 10:36:17.673700   13044 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-748800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 10:36:17.673700   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:17.674554   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-748800 -DynamicMemoryEnabled $false
	I0318 10:36:19.927245   13044 main.go:141] libmachine: [stdout =====>] : 
	I0318 10:36:19.927766   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:19.927766   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-748800 -Count 2
	I0318 10:36:22.104306   13044 main.go:141] libmachine: [stdout =====>] : 
	I0318 10:36:22.105192   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:22.105424   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-748800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\boot2docker.iso'
	I0318 10:36:24.691751   13044 main.go:141] libmachine: [stdout =====>] : 
	I0318 10:36:24.691751   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:24.692410   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-748800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\disk.vhd'
	I0318 10:36:27.445212   13044 main.go:141] libmachine: [stdout =====>] : 
	I0318 10:36:27.445212   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:27.445212   13044 main.go:141] libmachine: Starting VM...
	I0318 10:36:27.445212   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-748800
	I0318 10:36:30.627911   13044 main.go:141] libmachine: [stdout =====>] : 
	I0318 10:36:30.627911   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:30.627911   13044 main.go:141] libmachine: Waiting for host to start...
	I0318 10:36:30.627911   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:36:32.925616   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:36:32.925616   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:32.926627   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:36:35.553625   13044 main.go:141] libmachine: [stdout =====>] : 
	I0318 10:36:35.554042   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:36.564129   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:36:38.755702   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:36:38.756056   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:38.756056   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:36:41.328275   13044 main.go:141] libmachine: [stdout =====>] : 
	I0318 10:36:41.329054   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:42.332702   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:36:44.563162   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:36:44.563273   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:44.563340   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:36:47.124897   13044 main.go:141] libmachine: [stdout =====>] : 
	I0318 10:36:47.125320   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:48.132419   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:36:50.379286   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:36:50.379286   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:50.379286   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:36:52.935337   13044 main.go:141] libmachine: [stdout =====>] : 
	I0318 10:36:52.935337   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:53.941214   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:36:56.219836   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:36:56.220673   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:56.220673   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:36:58.839235   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:36:58.839461   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:36:58.839461   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:37:01.041672   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:37:01.041976   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:01.041976   13044 machine.go:94] provisionDockerMachine start ...
	I0318 10:37:01.042141   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:37:03.230069   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:37:03.230069   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:03.230157   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:37:05.836555   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:37:05.836927   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:05.842818   13044 main.go:141] libmachine: Using SSH client type: native
	I0318 10:37:05.855737   13044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.150.46 22 <nil> <nil>}
	I0318 10:37:05.855737   13044 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 10:37:05.991632   13044 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 10:37:05.991632   13044 buildroot.go:166] provisioning hostname "addons-748800"
	I0318 10:37:05.991632   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:37:08.198869   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:37:08.198869   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:08.199255   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:37:10.769630   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:37:10.769630   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:10.775967   13044 main.go:141] libmachine: Using SSH client type: native
	I0318 10:37:10.776051   13044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.150.46 22 <nil> <nil>}
	I0318 10:37:10.776051   13044 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-748800 && echo "addons-748800" | sudo tee /etc/hostname
	I0318 10:37:10.937713   13044 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-748800
	
	I0318 10:37:10.937713   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:37:13.118658   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:37:13.118737   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:13.118737   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:37:15.792050   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:37:15.792365   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:15.801311   13044 main.go:141] libmachine: Using SSH client type: native
	I0318 10:37:15.802052   13044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.150.46 22 <nil> <nil>}
	I0318 10:37:15.802052   13044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-748800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-748800/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-748800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 10:37:15.969357   13044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 10:37:15.969357   13044 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0318 10:37:15.969357   13044 buildroot.go:174] setting up certificates
	I0318 10:37:15.969357   13044 provision.go:84] configureAuth start
	I0318 10:37:15.969905   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:37:18.103877   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:37:18.103877   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:18.103877   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:37:20.669082   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:37:20.670107   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:20.670216   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:37:22.805599   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:37:22.806536   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:22.806536   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:37:25.363571   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:37:25.363571   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:25.364086   13044 provision.go:143] copyHostCerts
	I0318 10:37:25.364740   13044 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 10:37:25.365883   13044 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 10:37:25.367524   13044 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 10:37:25.368495   13044 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-748800 san=[127.0.0.1 172.25.150.46 addons-748800 localhost minikube]
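After copying the host CA material, the provisioner mints a per-machine server certificate whose SANs cover every name the Docker daemon may be reached by: 127.0.0.1, the VM address 172.25.150.46, the machine name, localhost, and minikube. A toy version of that issuance using only the Go standard library; it generates a placeholder CA instead of loading the real ca.pem and ca-key.pem, and key sizes and validity are placeholder choices:

    // Toy issuance of a server cert with the SAN list from the log line
    // above; not minikube's actual certificate code.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func check(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Placeholder CA: minikube loads ca.pem / ca-key.pem from the
    	// profile's certs directory instead of generating one here.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
    	check(err)
    	caCert, err := x509.ParseCertificate(caDER)
    	check(err)

    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	srv := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-748800"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN list from the log, split by type.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.25.150.46")},
    		DNSNames:    []string{"addons-748800", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
    	check(err)
    	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }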
	I0318 10:37:25.536282   13044 provision.go:177] copyRemoteCerts
	I0318 10:37:25.548925   13044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 10:37:25.548925   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:37:27.751081   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:37:27.751081   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:27.751081   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:37:30.327748   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:37:30.327748   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:30.328908   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:37:30.444917   13044 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8959006s)
	I0318 10:37:30.445519   13044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 10:37:30.491303   13044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 10:37:30.539599   13044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 10:37:30.589454   13044 provision.go:87] duration metric: took 14.6198866s to configureAuth
	I0318 10:37:30.589454   13044 buildroot.go:189] setting minikube options for container-runtime
	I0318 10:37:30.590140   13044 config.go:182] Loaded profile config "addons-748800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 10:37:30.590237   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:37:32.764637   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:37:32.764637   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:32.765452   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:37:35.292490   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:37:35.292490   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:35.299368   13044 main.go:141] libmachine: Using SSH client type: native
	I0318 10:37:35.300006   13044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.150.46 22 <nil> <nil>}
	I0318 10:37:35.300068   13044 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 10:37:35.431668   13044 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 10:37:35.431786   13044 buildroot.go:70] root file system type: tmpfs
	I0318 10:37:35.432001   13044 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 10:37:35.432001   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:37:37.605904   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:37:37.605904   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:37.605904   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:37:40.185681   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:37:40.185681   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:40.193648   13044 main.go:141] libmachine: Using SSH client type: native
	I0318 10:37:40.193833   13044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.150.46 22 <nil> <nil>}
	I0318 10:37:40.193833   13044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 10:37:40.359245   13044 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
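
The comment block inside this unit explains the one systemd subtlety worth remembering: a drop-in can only replace an inherited ExecStart= by first assigning it empty, otherwise systemd rejects the unit for having two start commands. A minimal sketch of the same override pattern, using a hypothetical mydaemon service that is not part of this log:

	sudo mkdir -p /etc/systemd/system/mydaemon.service.d
	sudo tee /etc/systemd/system/mydaemon.service.d/override.conf <<'EOF'
	[Service]
	# The empty assignment clears the ExecStart inherited from the base unit;
	# the second assignment installs the replacement command.
	ExecStart=
	ExecStart=/usr/local/bin/mydaemon --some-flag
	EOF
	sudo systemctl daemon-reload
	sudo systemctl restart mydaemon
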
	
	I0318 10:37:40.359437   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:37:42.590998   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:37:42.590998   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:42.591672   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:37:45.189221   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:37:45.190043   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:45.196616   13044 main.go:141] libmachine: Using SSH client type: native
	I0318 10:37:45.196616   13044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.150.46 22 <nil> <nil>}
	I0318 10:37:45.196616   13044 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 10:37:47.377074   13044 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
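
The diff-or-replace one-liner above only swaps the unit in and restarts Docker when the rendered file actually differs; on this first boot, diff fails simply because /lib/systemd/system/docker.service does not exist yet, so the replace branch runs. The same idempotent-install pattern in isolation, with hypothetical file and service names:

	new=/lib/systemd/system/foo.service.new
	cur=/lib/systemd/system/foo.service
	if ! sudo diff -u "$cur" "$new"; then
	  sudo mv "$new" "$cur"
	  sudo systemctl daemon-reload && sudo systemctl enable foo && sudo systemctl restart foo
	fi
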
	
	I0318 10:37:47.377074   13044 machine.go:97] duration metric: took 46.3348149s to provisionDockerMachine
	I0318 10:37:47.377074   13044 client.go:171] duration metric: took 1m59.0226061s to LocalClient.Create
	I0318 10:37:47.377659   13044 start.go:167] duration metric: took 1m59.023143s to libmachine.API.Create "addons-748800"
	I0318 10:37:47.377731   13044 start.go:293] postStartSetup for "addons-748800" (driver="hyperv")
	I0318 10:37:47.377788   13044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 10:37:47.390852   13044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 10:37:47.390852   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:37:49.517717   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:37:49.517717   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:49.517717   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:37:52.089469   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:37:52.089573   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:52.090508   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:37:52.205947   13044 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8150651s)
	I0318 10:37:52.220284   13044 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 10:37:52.227919   13044 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 10:37:52.228208   13044 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0318 10:37:52.228755   13044 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0318 10:37:52.229108   13044 start.go:296] duration metric: took 4.8512902s for postStartSetup
	I0318 10:37:52.233070   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:37:54.372524   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:37:54.372524   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:54.372985   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:37:56.952548   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:37:56.952548   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:56.953049   13044 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\config.json ...
	I0318 10:37:56.955937   13044 start.go:128] duration metric: took 2m8.6048178s to createHost
	I0318 10:37:56.955937   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:37:59.131658   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:37:59.131658   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:37:59.132203   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:38:01.685739   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:38:01.685739   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:38:01.693147   13044 main.go:141] libmachine: Using SSH client type: native
	I0318 10:38:01.693744   13044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.150.46 22 <nil> <nil>}
	I0318 10:38:01.693744   13044 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 10:38:01.833788   13044 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710758281.823852690
	
	I0318 10:38:01.833788   13044 fix.go:216] guest clock: 1710758281.823852690
	I0318 10:38:01.833788   13044 fix.go:229] Guest: 2024-03-18 10:38:01.82385269 +0000 UTC Remote: 2024-03-18 10:37:56.9559372 +0000 UTC m=+134.492347101 (delta=4.86791549s)
	I0318 10:38:01.833788   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:38:03.979925   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:38:03.980228   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:38:03.980295   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:38:06.568125   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:38:06.568125   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:38:06.573040   13044 main.go:141] libmachine: Using SSH client type: native
	I0318 10:38:06.573925   13044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.150.46 22 <nil> <nil>}
	I0318 10:38:06.573925   13044 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710758281
	I0318 10:38:06.716108   13044 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 10:38:01 UTC 2024
	
	I0318 10:38:06.716108   13044 fix.go:236] clock set: Mon Mar 18 10:38:01 UTC 2024
	 (err=<nil>)
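
The clock fix is two SSH round trips: read the guest clock with date +%s.%N, compare it against the host clock (a 4.87s delta above), then write the host's epoch seconds back with date -s. A standalone sketch of the same idea, assuming plain SSH access to the guest and GNU date on both ends:

	guest=$(ssh docker@172.25.150.46 'date +%s.%N')   # guest wall clock
	host=$(date +%s.%N)                               # host wall clock
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.2fs\n", h - g }'
	ssh docker@172.25.150.46 "sudo date -s @${host%.*}"   # whole seconds, as in the log
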
	I0318 10:38:06.716108   13044 start.go:83] releasing machines lock for "addons-748800", held for 2m18.3656072s
	I0318 10:38:06.716108   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:38:08.880189   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:38:08.880799   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:38:08.880799   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:38:11.494678   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:38:11.494678   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:38:11.499073   13044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 10:38:11.499073   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:38:11.510627   13044 ssh_runner.go:195] Run: cat /version.json
	I0318 10:38:11.511192   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:38:13.735477   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:38:13.735477   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:38:13.735861   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:38:13.770808   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:38:13.770808   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:38:13.771009   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:38:16.448080   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:38:16.448080   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:38:16.448080   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:38:16.471397   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:38:16.471397   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:38:16.471926   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:38:16.681303   13044 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.182198s)
	I0318 10:38:16.681303   13044 ssh_runner.go:235] Completed: cat /version.json: (5.1706449s)
	I0318 10:38:16.693970   13044 ssh_runner.go:195] Run: systemctl --version
	I0318 10:38:16.715994   13044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 10:38:16.724315   13044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 10:38:16.734768   13044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 10:38:16.764415   13044 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
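
The find invocation above is logged with its shell quoting stripped; restored, it reads as below. It sidelines competing bridge and podman CNI configs by renaming them with a .mk_disabled suffix, which is why 87-podman-bridge.conflist was just reported as disabled:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
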
	I0318 10:38:16.764415   13044 start.go:494] detecting cgroup driver to use...
	I0318 10:38:16.764917   13044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 10:38:16.812149   13044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 10:38:16.842451   13044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 10:38:16.860043   13044 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 10:38:16.872937   13044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 10:38:16.906019   13044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 10:38:16.941594   13044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 10:38:16.974188   13044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 10:38:17.006052   13044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 10:38:17.038158   13044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 10:38:17.069269   13044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 10:38:17.097855   13044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 10:38:17.126953   13044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 10:38:17.330177   13044 ssh_runner.go:195] Run: sudo systemctl restart containerd
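
Taken together, the sed runs above pin containerd to the cgroupfs driver, the pause:3.9 sandbox image, the runc v2 shim, and /etc/cni/net.d for CNI configs before containerd is restarted. One way to confirm the result on the guest (a spot check, assuming a stock config.toml layout):

	grep -E 'SystemdCgroup|sandbox_image|conf_dir|runc\.v2' /etc/containerd/config.toml
	# Expected after the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
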
	I0318 10:38:17.359841   13044 start.go:494] detecting cgroup driver to use...
	I0318 10:38:17.372071   13044 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 10:38:17.411463   13044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 10:38:17.446461   13044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 10:38:17.487820   13044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 10:38:17.528112   13044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 10:38:17.566643   13044 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 10:38:17.639122   13044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 10:38:17.665021   13044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 10:38:17.712938   13044 ssh_runner.go:195] Run: which cri-dockerd
	I0318 10:38:17.730595   13044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 10:38:17.748204   13044 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 10:38:17.792618   13044 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 10:38:17.997254   13044 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 10:38:18.203228   13044 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 10:38:18.203378   13044 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 10:38:18.250419   13044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 10:38:18.450717   13044 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 10:38:20.955951   13044 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5052189s)
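
The 130-byte /etc/docker/daemon.json scp'd above is not echoed in the log; a plausible minimal file selecting the cgroupfs driver would look like the sketch below (content assumed, not taken from the log):

	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
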
	I0318 10:38:20.969990   13044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 10:38:21.006857   13044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 10:38:21.040523   13044 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 10:38:21.249408   13044 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 10:38:21.456272   13044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 10:38:21.648367   13044 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 10:38:21.691115   13044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 10:38:21.726034   13044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 10:38:21.917257   13044 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 10:38:22.024883   13044 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 10:38:22.037053   13044 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 10:38:22.045583   13044 start.go:562] Will wait 60s for crictl version
	I0318 10:38:22.057521   13044 ssh_runner.go:195] Run: which crictl
	I0318 10:38:22.077451   13044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 10:38:22.161147   13044 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
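
crictl needs no endpoint flags here because it reads the runtime-endpoint written to /etc/crictl.yaml a few steps earlier, so the version report is coming from Docker 25.0.4 via cri-dockerd:

	cat /etc/crictl.yaml   # runtime-endpoint: unix:///var/run/cri-dockerd.sock
	sudo crictl version    # RuntimeName: docker, RuntimeApiVersion: v1
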
	I0318 10:38:22.171459   13044 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 10:38:22.226461   13044 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 10:38:22.262737   13044 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 10:38:22.262737   13044 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 10:38:22.268547   13044 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 10:38:22.268547   13044 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 10:38:22.268625   13044 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 10:38:22.268625   13044 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ae:0d:2c Flags:up|broadcast|multicast|running}
	I0318 10:38:22.271270   13044 ip.go:210] interface addr: fe80::f8a6:d6b6:cc4:1ba0/64
	I0318 10:38:22.271270   13044 ip.go:210] interface addr: 172.25.144.1/20
	I0318 10:38:22.281811   13044 ssh_runner.go:195] Run: grep 172.25.144.1	host.minikube.internal$ /etc/hosts
	I0318 10:38:22.287932   13044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
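
This one-liner refreshes /etc/hosts in place: strip any stale host.minikube.internal entry, append the current mapping, stage the result in a temp file, then cp (rather than mv) it back so the file's inode and permissions survive. The same idiom with a hypothetical name and IP:

	ip=192.0.2.10; name=example.internal
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts
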
	I0318 10:38:22.310170   13044 kubeadm.go:877] updating cluster {Name:addons-748800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-748800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.150.46 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 10:38:22.310366   13044 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 10:38:22.321257   13044 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 10:38:22.348096   13044 docker.go:685] Got preloaded images: 
	I0318 10:38:22.348096   13044 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0318 10:38:22.362296   13044 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 10:38:22.392880   13044 ssh_runner.go:195] Run: which lz4
	I0318 10:38:22.411221   13044 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 10:38:22.418357   13044 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 10:38:22.418357   13044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0318 10:38:24.590172   13044 docker.go:649] duration metric: took 2.1902758s to copy over tarball
	I0318 10:38:24.604401   13044 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 10:38:31.906724   13044 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (7.3022778s)
	I0318 10:38:31.906724   13044 ssh_runner.go:146] rm: /preloaded.tar.lz4
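
The preload path avoids pulling images one by one: a ~423 MB lz4 tarball of the Docker image store is copied in and unpacked under /var, with --xattrs --xattrs-include security.capability so file capabilities inside the layers survive extraction. The extract step in isolation:

	sudo tar --xattrs --xattrs-include security.capability \
	    -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
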
	I0318 10:38:31.979688   13044 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 10:38:32.002183   13044 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0318 10:38:32.050583   13044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 10:38:32.266515   13044 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 10:38:37.959058   13044 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.6925088s)
	I0318 10:38:37.968460   13044 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 10:38:37.998054   13044 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 10:38:37.998204   13044 cache_images.go:84] Images are preloaded, skipping loading
	I0318 10:38:37.998204   13044 kubeadm.go:928] updating node { 172.25.150.46 8443 v1.28.4 docker true true} ...
	I0318 10:38:37.998607   13044 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-748800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.150.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-748800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 10:38:38.008292   13044 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 10:38:38.053457   13044 cni.go:84] Creating CNI manager for ""
	I0318 10:38:38.053516   13044 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 10:38:38.053516   13044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 10:38:38.053516   13044 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.150.46 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-748800 NodeName:addons-748800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.150.46"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.150.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 10:38:38.053891   13044 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.150.46
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-748800"
	  kubeletExtraArgs:
	    node-ip: 172.25.150.46
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.150.46"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 10:38:38.067931   13044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 10:38:38.086478   13044 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 10:38:38.097460   13044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 10:38:38.116279   13044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 10:38:38.161411   13044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 10:38:38.195523   13044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0318 10:38:38.242335   13044 ssh_runner.go:195] Run: grep 172.25.150.46	control-plane.minikube.internal$ /etc/hosts
	I0318 10:38:38.248589   13044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.150.46	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 10:38:38.284783   13044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 10:38:38.482016   13044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 10:38:38.515148   13044 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800 for IP: 172.25.150.46
	I0318 10:38:38.515211   13044 certs.go:194] generating shared ca certs ...
	I0318 10:38:38.515211   13044 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:38:38.515561   13044 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0318 10:38:38.686175   13044 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt ...
	I0318 10:38:38.686175   13044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt: {Name:mkb0ebdce3b528a3c449211fdfbba2d86c130c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:38:38.687714   13044 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key ...
	I0318 10:38:38.687714   13044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key: {Name:mk1ec59eaa4c2f7a35370569c3fc13a80bc1499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:38:38.688285   13044 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0318 10:38:38.971495   13044 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0318 10:38:38.971495   13044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk78efc1a7bd38719c2f7a853f9109f9a1a3252e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:38:38.973625   13044 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key ...
	I0318 10:38:38.973625   13044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk57de77abeaf23b535083770f5522a07b562b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:38:38.973859   13044 certs.go:256] generating profile certs ...
	I0318 10:38:38.975030   13044 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.key
	I0318 10:38:38.975030   13044 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt with IP's: []
	I0318 10:38:39.044790   13044 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt ...
	I0318 10:38:39.044790   13044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: {Name:mk34d30134b3f3c688c22a515c93972d941b558b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:38:39.046462   13044 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.key ...
	I0318 10:38:39.046580   13044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.key: {Name:mkc846999cbd43f372cd0a47252e6372c5c271d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:38:39.047697   13044 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\apiserver.key.e8c6145b
	I0318 10:38:39.047911   13044 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\apiserver.crt.e8c6145b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.150.46]
	I0318 10:38:39.362438   13044 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\apiserver.crt.e8c6145b ...
	I0318 10:38:39.362438   13044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\apiserver.crt.e8c6145b: {Name:mk714b5efe52911bb410a56cd8f13af4fc9b517e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:38:39.363435   13044 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\apiserver.key.e8c6145b ...
	I0318 10:38:39.363435   13044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\apiserver.key.e8c6145b: {Name:mka02c6dd123e710c6c2da99f027d09f33271ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:38:39.364606   13044 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\apiserver.crt.e8c6145b -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\apiserver.crt
	I0318 10:38:39.375502   13044 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\apiserver.key.e8c6145b -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\apiserver.key
	I0318 10:38:39.376530   13044 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\proxy-client.key
	I0318 10:38:39.376530   13044 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\proxy-client.crt with IP's: []
	I0318 10:38:39.891080   13044 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\proxy-client.crt ...
	I0318 10:38:39.891080   13044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\proxy-client.crt: {Name:mka22de4d0dc08c84ddc6798fe1f8fae177d12ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:38:39.892963   13044 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\proxy-client.key ...
	I0318 10:38:39.892963   13044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\proxy-client.key: {Name:mk80b776c46f2cccb3df038e582eb50b342940c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:38:39.904042   13044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0318 10:38:39.904042   13044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 10:38:39.905249   13044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 10:38:39.905249   13044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 10:38:39.908239   13044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 10:38:39.959438   13044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 10:38:40.005971   13044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 10:38:40.048490   13044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 10:38:40.087198   13044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0318 10:38:40.140441   13044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 10:38:40.189859   13044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 10:38:40.239399   13044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 10:38:40.289781   13044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 10:38:40.337789   13044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 10:38:40.383728   13044 ssh_runner.go:195] Run: openssl version
	I0318 10:38:40.406242   13044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 10:38:40.439795   13044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 10:38:40.448481   13044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 10:38:40.461931   13044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 10:38:40.486872   13044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
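
The b5213941.0 link name is OpenSSL's subject-hash lookup convention, and the hash is exactly what openssl x509 -hash -noout printed just above; the link could equally be created from the computed value:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # here h = b5213941
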
	I0318 10:38:40.520161   13044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 10:38:40.527144   13044 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 10:38:40.527414   13044 kubeadm.go:391] StartCluster: {Name:addons-748800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-748800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.150.46 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 10:38:40.538321   13044 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 10:38:40.578735   13044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 10:38:40.608322   13044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 10:38:40.638329   13044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 10:38:40.658292   13044 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 10:38:40.658373   13044 kubeadm.go:156] found existing configuration files:
	
	I0318 10:38:40.670472   13044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 10:38:40.688767   13044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 10:38:40.701725   13044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 10:38:40.732618   13044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 10:38:40.752421   13044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 10:38:40.765447   13044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 10:38:40.795075   13044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 10:38:40.810031   13044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 10:38:40.820900   13044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 10:38:40.848508   13044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 10:38:40.867490   13044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 10:38:40.880322   13044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 10:38:40.898685   13044 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 10:38:41.182141   13044 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 10:38:56.405923   13044 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 10:38:56.405923   13044 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 10:38:56.405923   13044 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 10:38:56.405923   13044 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 10:38:56.406910   13044 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0318 10:38:56.406910   13044 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 10:38:56.408929   13044 out.go:204]   - Generating certificates and keys ...
	I0318 10:38:56.409924   13044 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 10:38:56.409924   13044 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 10:38:56.409924   13044 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 10:38:56.409924   13044 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 10:38:56.409924   13044 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 10:38:56.409924   13044 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 10:38:56.409924   13044 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 10:38:56.410919   13044 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-748800 localhost] and IPs [172.25.150.46 127.0.0.1 ::1]
	I0318 10:38:56.410919   13044 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 10:38:56.410919   13044 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-748800 localhost] and IPs [172.25.150.46 127.0.0.1 ::1]
	I0318 10:38:56.410919   13044 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 10:38:56.410919   13044 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 10:38:56.410919   13044 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 10:38:56.410919   13044 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 10:38:56.410919   13044 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 10:38:56.412095   13044 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 10:38:56.412095   13044 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 10:38:56.412095   13044 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 10:38:56.412095   13044 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 10:38:56.412095   13044 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 10:38:56.415957   13044 out.go:204]   - Booting up control plane ...
	I0318 10:38:56.415957   13044 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 10:38:56.415957   13044 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 10:38:56.415957   13044 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 10:38:56.415957   13044 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 10:38:56.416917   13044 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 10:38:56.416917   13044 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 10:38:56.416917   13044 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 10:38:56.416917   13044 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.004549 seconds
	I0318 10:38:56.416917   13044 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 10:38:56.417921   13044 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 10:38:56.417921   13044 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 10:38:56.417921   13044 kubeadm.go:309] [mark-control-plane] Marking the node addons-748800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 10:38:56.417921   13044 kubeadm.go:309] [bootstrap-token] Using token: vf558w.iml0m7o1wsc1wep5
	I0318 10:38:56.420919   13044 out.go:204]   - Configuring RBAC rules ...
	I0318 10:38:56.421949   13044 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 10:38:56.421949   13044 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 10:38:56.421949   13044 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 10:38:56.421949   13044 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0318 10:38:56.422948   13044 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 10:38:56.422948   13044 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 10:38:56.422948   13044 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 10:38:56.422948   13044 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 10:38:56.422948   13044 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 10:38:56.422948   13044 kubeadm.go:309] 
	I0318 10:38:56.423955   13044 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 10:38:56.423955   13044 kubeadm.go:309] 
	I0318 10:38:56.423955   13044 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 10:38:56.423955   13044 kubeadm.go:309] 
	I0318 10:38:56.423955   13044 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 10:38:56.423955   13044 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 10:38:56.423955   13044 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 10:38:56.423955   13044 kubeadm.go:309] 
	I0318 10:38:56.423955   13044 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 10:38:56.423955   13044 kubeadm.go:309] 
	I0318 10:38:56.423955   13044 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 10:38:56.423955   13044 kubeadm.go:309] 
	I0318 10:38:56.423955   13044 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 10:38:56.424926   13044 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 10:38:56.424926   13044 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 10:38:56.424926   13044 kubeadm.go:309] 
	I0318 10:38:56.424926   13044 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 10:38:56.424926   13044 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 10:38:56.424926   13044 kubeadm.go:309] 
	I0318 10:38:56.424926   13044 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vf558w.iml0m7o1wsc1wep5 \
	I0318 10:38:56.424926   13044 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef \
	I0318 10:38:56.424926   13044 kubeadm.go:309] 	--control-plane 
	I0318 10:38:56.424926   13044 kubeadm.go:309] 
	I0318 10:38:56.425914   13044 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 10:38:56.425914   13044 kubeadm.go:309] 
	I0318 10:38:56.425914   13044 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vf558w.iml0m7o1wsc1wep5 \
	I0318 10:38:56.425914   13044 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef 
	I0318 10:38:56.425914   13044 cni.go:84] Creating CNI manager for ""
	I0318 10:38:56.425914   13044 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 10:38:56.431997   13044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 10:38:56.446914   13044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 10:38:56.471501   13044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 10:38:56.526343   13044 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 10:38:56.541605   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-748800 minikube.k8s.io/updated_at=2024_03_18T10_38_56_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd minikube.k8s.io/name=addons-748800 minikube.k8s.io/primary=true
	I0318 10:38:56.541605   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:38:56.587620   13044 ops.go:34] apiserver oom_adj: -16
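The -16 recorded here is the kube-apiserver's OOM-killer adjustment, read so minikube can confirm the apiserver is biased against being killed under memory pressure. A local Go equivalent of the /proc probe above, as a sketch:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err) // pgrep exits non-zero when no process matches
		}
		// There is a single apiserver on the node; take the first PID printed.
		pid := strings.Fields(string(out))[0]
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj) // the log above reports -16
	}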
	I0318 10:38:56.932615   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:38:57.436211   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:38:57.944843   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:38:58.436835   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:38:58.937765   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:38:59.441249   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:38:59.947135   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:00.448806   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:00.936719   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:01.439601   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:01.946201   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:02.439476   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:02.945974   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:03.436068   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:03.945249   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:04.446704   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:04.940907   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:05.448243   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:05.939139   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:06.447626   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:06.936034   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:07.442160   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:07.936615   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:08.440906   13044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 10:39:08.569382   13044 kubeadm.go:1107] duration metric: took 12.0428858s to wait for elevateKubeSystemPrivileges
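The run of identical `kubectl get sa default` commands between 10:38:56 and 10:39:08 above is a fixed-interval retry: the `default` service account only exists once the controller-manager's service-account controller catches up after kubeadm init, and the 12.0428858s metric is exactly that wait. A minimal sketch of the same poll-until-success pattern (command and roughly 500ms cadence taken from the log; the 2-minute timeout and the runRemote helper are illustrative stand-ins for ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runRemote is a hypothetical stand-in for minikube's ssh_runner; here it simply
	// runs the command locally and reports its exit status.
	func runRemote(args ...string) error {
		return exec.Command(args[0], args[1:]...).Run()
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		start := time.Now()
		for {
			err := runRemote("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if err == nil {
				fmt.Printf("default service account ready after %s\n", time.Since(start))
				return
			}
			if time.Now().After(deadline) {
				panic("timed out waiting for default service account")
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
	}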
	W0318 10:39:08.569530   13044 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 10:39:08.569637   13044 kubeadm.go:393] duration metric: took 28.0420274s to StartCluster
	I0318 10:39:08.569666   13044 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:39:08.569666   13044 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 10:39:08.570743   13044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:39:08.572287   13044 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 10:39:08.572287   13044 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.150.46 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 10:39:08.572287   13044 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0318 10:39:08.575331   13044 out.go:177] * Verifying Kubernetes components...
	I0318 10:39:08.572287   13044 addons.go:69] Setting yakd=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting helm-tiller=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting ingress=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting cloud-spanner=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting ingress-dns=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting default-storageclass=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting gcp-auth=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting inspektor-gadget=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting registry=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting storage-provisioner=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting metrics-server=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 addons.go:69] Setting volumesnapshots=true in profile "addons-748800"
	I0318 10:39:08.572287   13044 config.go:182] Loaded profile config "addons-748800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
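The toEnable map logged at addons.go:502 is a flat addon-name to bool table, and each `Setting ...=true in profile` line above is one iteration over its true entries; the interleaved ordering and identical timestamps are just goroutine scheduling. A sketch of that filtering step (map abridged from the log):

	package main

	import (
		"fmt"
		"sort"
	)

	func main() {
		// Abridged from the toEnable map in the log above.
		toEnable := map[string]bool{
			"yakd": true, "helm-tiller": true, "ingress": true, "cloud-spanner": true,
			"csi-hostpath-driver": true, "ingress-dns": true, "default-storageclass": true,
			"gcp-auth": true, "inspektor-gadget": true, "registry": true,
			"storage-provisioner": true, "metrics-server": true,
			"storage-provisioner-rancher": true, "nvidia-device-plugin": true,
			"volumesnapshots": true,
			"ambassador": false, "dashboard": false, "olm": false, // ...remaining entries false
		}
		var enabled []string
		for name, on := range toEnable {
			if on {
				enabled = append(enabled, name)
			}
		}
		sort.Strings(enabled) // Go map iteration order is random, like the log's ordering
		for _, name := range enabled {
			fmt.Printf("Setting %s=true in profile %q\n", name, "addons-748800")
		}
	}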
	I0318 10:39:08.597671   13044 addons.go:234] Setting addon yakd=true in "addons-748800"
	I0318 10:39:08.597962   13044 addons.go:234] Setting addon inspektor-gadget=true in "addons-748800"
	I0318 10:39:08.598148   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:08.598148   13044 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-748800"
	I0318 10:39:08.598148   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:08.598243   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:08.598148   13044 addons.go:234] Setting addon metrics-server=true in "addons-748800"
	I0318 10:39:08.598415   13044 addons.go:234] Setting addon volumesnapshots=true in "addons-748800"
	I0318 10:39:08.598512   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:08.597962   13044 addons.go:234] Setting addon storage-provisioner=true in "addons-748800"
	I0318 10:39:08.599055   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:08.598512   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:08.597704   13044 addons.go:234] Setting addon ingress=true in "addons-748800"
	I0318 10:39:08.599584   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:08.597704   13044 addons.go:234] Setting addon ingress-dns=true in "addons-748800"
	I0318 10:39:08.597911   13044 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-748800"
	I0318 10:39:08.600292   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:08.597962   13044 mustload.go:65] Loading cluster: addons-748800
	I0318 10:39:08.600742   13044 config.go:182] Loaded profile config "addons-748800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 10:39:08.597962   13044 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-748800"
	I0318 10:39:08.601805   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:08.597704   13044 addons.go:234] Setting addon cloud-spanner=true in "addons-748800"
	I0318 10:39:08.602153   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:08.597962   13044 addons.go:234] Setting addon registry=true in "addons-748800"
	I0318 10:39:08.602409   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:08.602570   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.602859   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.598148   13044 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-748800"
	I0318 10:39:08.597704   13044 addons.go:234] Setting addon helm-tiller=true in "addons-748800"
	I0318 10:39:08.603183   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:08.609683   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.609683   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.610996   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.611657   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.611657   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.611952   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.617397   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.617397   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.617397   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.617397   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.618849   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.619246   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:08.619794   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
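Every `libmachine: [executing ==>]` line in this block is one synchronous PowerShell round-trip; the Hyper-V driver has no persistent agent, so each addon goroutine independently shells out to query VM state. A minimal Go equivalent of one such invocation (Windows only; executable path, flags, and expression copied from the log lines above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
		// Same flags and expression as the libmachine lines above.
		out, err := exec.Command(ps, "-NoProfile", "-NonInteractive",
			"( Hyper-V\\Get-VM addons-748800 ).state").CombinedOutput()
		if err != nil {
			panic(err)
		}
		fmt.Printf("[stdout =====>] : %s", out) // expected: "Running"
	}

This per-call shelling out is why the VM state checks dominate wall-clock time in the 10:39:08 through 10:39:15 stretch below.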
	I0318 10:39:08.619794   13044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 10:39:09.819017   13044 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.2467224s)
	I0318 10:39:09.819644   13044 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
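This pipeline round-trips the CoreDNS ConfigMap through sed and `kubectl replace -f -`. Unescaped, the stanza it inserts ahead of the Corefile's forward directive is:

	hosts {
	   172.25.144.1 host.minikube.internal
	   fallthrough
	}

It also inserts a `log` directive before `errors`; the result is confirmed at 10:39:19 below, when the host record is reported as injected.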
	I0318 10:39:10.214420   13044 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.5946161s)
	I0318 10:39:10.237469   13044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 10:39:14.766167   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:14.766167   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:14.777162   13044 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0318 10:39:14.780182   13044 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0318 10:39:14.780182   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0318 10:39:14.780182   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:15.225174   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.225174   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.228177   13044 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0318 10:39:15.232418   13044 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 10:39:15.232507   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 10:39:15.232630   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:15.312051   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.312051   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.312245   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.312245   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.312245   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:15.318499   13044 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-748800"
	I0318 10:39:15.318499   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:15.319632   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:15.520868   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.520868   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.523886   13044 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0318 10:39:15.529711   13044 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0318 10:39:15.527452   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.529711   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.534989   13044 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0318 10:39:15.541752   13044 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0318 10:39:15.541752   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0318 10:39:15.541752   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:15.539652   13044 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0318 10:39:15.553060   13044 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0318 10:39:15.553060   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.559574   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.559574   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.559574   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.561589   13044 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0318 10:39:15.564580   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.567906   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.570367   13044 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0318 10:39:15.567906   13044 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0318 10:39:15.566060   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.566060   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.566770   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.567716   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.559574   13044 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0318 10:39:15.570405   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0318 10:39:15.572445   13044 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0318 10:39:15.572979   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.572979   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.573026   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.573026   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.575384   13044 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0318 10:39:15.580961   13044 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0318 10:39:15.577884   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0318 10:39:15.575384   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:15.579381   13044 addons.go:234] Setting addon default-storageclass=true in "addons-748800"
	I0318 10:39:15.579559   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.580961   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:15.585405   13044 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 10:39:15.589599   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:15.607809   13044 out.go:177]   - Using image docker.io/registry:2.8.3
	I0318 10:39:15.610812   13044 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0318 10:39:15.610812   13044 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0318 10:39:15.610812   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.611480   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:15.619251   13044 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0318 10:39:15.614212   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0318 10:39:15.614212   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:15.618222   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:15.624470   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:15.624470   13044 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 10:39:15.634193   13044 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 10:39:15.638146   13044 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0318 10:39:15.638146   13044 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0318 10:39:15.638146   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0318 10:39:15.643083   13044 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 10:39:15.646146   13044 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 10:39:15.645177   13044 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0318 10:39:15.645177   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:15.653350   13044 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 10:39:15.653350   13044 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0318 10:39:15.665057   13044 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 10:39:15.667280   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0318 10:39:15.667422   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0318 10:39:15.669025   13044 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0318 10:39:15.669843   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0318 10:39:15.669843   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:15.672850   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 10:39:15.672850   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:15.676492   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:15.677293   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:15.678180   13044 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0318 10:39:15.678180   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0318 10:39:15.678180   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:19.105825   13044 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.2861247s)
	I0318 10:39:19.105825   13044 start.go:948] {"host.minikube.internal": 172.25.144.1} host record injected into CoreDNS's ConfigMap
	I0318 10:39:19.109387   13044 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.8718636s)
	I0318 10:39:19.112813   13044 node_ready.go:35] waiting up to 6m0s for node "addons-748800" to be "Ready" ...
	I0318 10:39:19.307928   13044 node_ready.go:49] node "addons-748800" has status "Ready":"True"
	I0318 10:39:19.307928   13044 node_ready.go:38] duration metric: took 195.1144ms for node "addons-748800" to be "Ready" ...
	I0318 10:39:19.307928   13044 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 10:39:19.684925   13044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kbxll" in "kube-system" namespace to be "Ready" ...
	I0318 10:39:19.803051   13044 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-748800" context rescaled to 1 replicas
	I0318 10:39:21.165999   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:21.165999   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:21.165999   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:21.462882   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:21.463017   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:21.466090   13044 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0318 10:39:21.472211   13044 out.go:177]   - Using image docker.io/busybox:stable
	I0318 10:39:21.474071   13044 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 10:39:21.474071   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0318 10:39:21.474071   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:21.511729   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:21.511729   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:21.511729   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:21.575110   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:21.575110   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:21.575110   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:21.715608   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:21.715608   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:21.715780   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:21.747255   13044 pod_ready.go:102] pod "coredns-5dd5756b68-kbxll" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:21.753255   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:21.753255   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:21.753255   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:21.909950   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:21.909950   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:21.909950   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:21.927278   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:21.927278   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:21.927278   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:22.137820   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:22.137820   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:22.137820   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:22.238823   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:22.238823   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:22.238823   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:22.295392   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:22.295392   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:22.295392   13044 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 10:39:22.295392   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 10:39:22.295392   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:22.391515   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:22.391515   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:22.392507   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:22.985454   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:22.985454   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:22.985454   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:22.985901   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:22.986030   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:22.986030   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:23.794550   13044 pod_ready.go:102] pod "coredns-5dd5756b68-kbxll" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:23.870196   13044 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0318 10:39:23.870196   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:26.210890   13044 pod_ready.go:102] pod "coredns-5dd5756b68-kbxll" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:27.808439   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:27.808439   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:27.808439   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:28.408728   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:28.408728   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:28.408728   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:28.744679   13044 pod_ready.go:102] pod "coredns-5dd5756b68-kbxll" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:28.775380   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:28.775380   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:28.776769   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
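Each sshutil.go:53 line in this stretch pairs a PowerShell IP lookup with a fresh per-task SSH client; the addon installers do not share one connection. A minimal dial of the same shape using golang.org/x/crypto/ssh, with the IP, port, user, and key path taken from the log (host-key verification is skipped here purely as a sketch simplification):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyPath := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa`
		pemBytes, err := os.ReadFile(keyPath)
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(pemBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; not for production use
		}
		client, err := ssh.Dial("tcp", "172.25.150.46:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("uname -a")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}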
	I0318 10:39:28.868438   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:28.868438   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:28.870089   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:29.035043   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:29.035043   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:29.036295   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:29.181067   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:29.181067   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:29.182065   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:29.274781   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:29.275334   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:29.276035   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:29.283078   13044 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 10:39:29.283078   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0318 10:39:29.314623   13044 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0318 10:39:29.314714   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0318 10:39:29.325622   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:29.326608   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:29.326608   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:29.395461   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:29.396368   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:29.397189   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:29.398213   13044 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0318 10:39:29.398306   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0318 10:39:29.435863   13044 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 10:39:29.435863   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 10:39:29.445367   13044 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0318 10:39:29.445367   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0318 10:39:29.463010   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:29.463010   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:29.463685   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:29.554808   13044 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0318 10:39:29.554941   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0318 10:39:29.567153   13044 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 10:39:29.567153   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 10:39:29.575705   13044 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0318 10:39:29.575786   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0318 10:39:29.617886   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:29.617993   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:29.618208   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
	I0318 10:39:29.623339   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:29.623339   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:29.623969   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:29.635986   13044 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0318 10:39:29.635986   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0318 10:39:29.658032   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 10:39:29.681982   13044 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0318 10:39:29.681982   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0318 10:39:29.708470   13044 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0318 10:39:29.708470   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0318 10:39:29.741908   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
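Each addon is applied as a single kubectl invocation over all of its manifests, with the kubeconfig passed via the environment exactly as in the command above. A local sketch of that exec pattern (sudo dropped; file list abridged to the metrics-server batch):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
		args := []string{"apply",
			"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		cmd := exec.Command(kubectl, args...)
		// KUBECONFIG is threaded through the environment, as in the logged command.
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}

Batching all files of one addon into a single apply is why the "Completed" durations at 10:39:37 below each cover a whole addon rather than one manifest.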
	I0318 10:39:29.820442   13044 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 10:39:29.820442   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0318 10:39:29.874290   13044 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0318 10:39:29.874290   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0318 10:39:29.881025   13044 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0318 10:39:29.881025   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0318 10:39:29.882029   13044 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0318 10:39:29.882029   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0318 10:39:29.895031   13044 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0318 10:39:29.895031   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0318 10:39:29.912046   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 10:39:29.993159   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 10:39:30.036831   13044 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0318 10:39:30.036967   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0318 10:39:30.047774   13044 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0318 10:39:30.047880   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0318 10:39:30.127850   13044 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0318 10:39:30.127850   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0318 10:39:30.166260   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:30.167302   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:30.167442   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:30.239182   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0318 10:39:30.247842   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0318 10:39:30.258886   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:30.258886   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:30.258886   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:30.359619   13044 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0318 10:39:30.359714   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0318 10:39:30.382813   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:30.383505   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:30.383808   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:30.391398   13044 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 10:39:30.391487   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0318 10:39:30.482952   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0318 10:39:30.633044   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 10:39:30.635052   13044 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0318 10:39:30.635052   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0318 10:39:30.729990   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 10:39:30.895917   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 10:39:30.908720   13044 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 10:39:30.908720   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0318 10:39:30.987922   13044 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0318 10:39:30.987922   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0318 10:39:31.158077   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 10:39:31.211697   13044 pod_ready.go:102] pod "coredns-5dd5756b68-kbxll" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:31.263120   13044 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0318 10:39:31.263120   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0318 10:39:31.532504   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:31.532504   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:31.533867   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:31.584769   13044 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0318 10:39:31.584769   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0318 10:39:31.634165   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:31.634245   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:31.634944   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:32.005263   13044 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0318 10:39:32.005323   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0318 10:39:32.352710   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 10:39:32.443175   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 10:39:32.452253   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:32.452253   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:32.452342   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:32.505122   13044 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0318 10:39:32.505122   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0318 10:39:32.696581   13044 pod_ready.go:97] pod "coredns-5dd5756b68-kbxll" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 10:39:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 10:39:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 10:39:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 10:39:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.25.150.46 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-03-18 10:39:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-18 10:39:21 +0000 UTC,FinishedAt:2024-03-18 10:39:31 +0000 UTC,ContainerID:docker://581efc15c07dabe7f40a61591709839d051302f35a8967bb0c7caa73f79c5425,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://581efc15c07dabe7f40a61591709839d051302f35a8967bb0c7caa73f79c5425 Started:0xc002946110 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0318 10:39:32.696711   13044 pod_ready.go:81] duration metric: took 13.011706s for pod "coredns-5dd5756b68-kbxll" in "kube-system" namespace to be "Ready" ...
	E0318 10:39:32.696711   13044 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-kbxll" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 10:39:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 10:39:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 10:39:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 10:39:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.25.150.46 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-03-18 10:39:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-18 10:39:21 +0000 UTC,FinishedAt:2024-03-18 10:39:31 +0000 UTC,ContainerID:docker://581efc15c07dabe7f40a61591709839d051302f35a8967bb0c7caa73f79c5425,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://581efc15c07dabe7f40a61591709839d051302f35a8967bb0c7caa73f79c5425 Started:0xc002946110 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0318 10:39:32.696764   13044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace to be "Ready" ...
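The Succeeded handling at 10:39:32 above is deliberate: the coredns deployment was rescaled to one replica at 10:39:19, so the surplus pod exited cleanly (phase Succeeded), and pod_ready treats a terminal phase as "this pod will never be Ready" and moves on to the sibling. The decision reduces to roughly the following (a sketch using k8s.io/api types, not minikube's actual helper):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// podReady mirrors the decision in the log: a Succeeded or Failed phase is a
	// terminal error (the pod will never become Ready); otherwise report the
	// Ready condition as-is.
	func podReady(p *corev1.Pod) (bool, error) {
		if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
			return false, fmt.Errorf("pod %s has terminal phase %s (skipping!)", p.Name, p.Status.Phase)
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		p := &corev1.Pod{}
		p.Name = "coredns-5dd5756b68-kbxll"
		p.Status.Phase = corev1.PodSucceeded
		_, err := podReady(p)
		fmt.Println(err) // terminal-phase pods surface as an error, as in the E0318 line above
	}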
	I0318 10:39:32.834051   13044 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0318 10:39:32.834051   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0318 10:39:33.085692   13044 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0318 10:39:33.085845   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0318 10:39:33.114232   13044 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0318 10:39:33.398552   13044 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0318 10:39:33.399602   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0318 10:39:33.517198   13044 addons.go:234] Setting addon gcp-auth=true in "addons-748800"
	I0318 10:39:33.517359   13044 host.go:66] Checking if "addons-748800" exists ...
	I0318 10:39:33.518597   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:33.684531   13044 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0318 10:39:33.684600   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0318 10:39:34.008238   13044 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 10:39:34.008329   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0318 10:39:34.189325   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 10:39:34.733857   13044 pod_ready.go:102] pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:35.777772   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:35.777772   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:35.791116   13044 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0318 10:39:35.791116   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-748800 ).state
	I0318 10:39:36.789284   13044 pod_ready.go:102] pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:37.350182   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.6918559s)
	I0318 10:39:37.350245   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.6082913s)
	I0318 10:39:37.350373   13044 addons.go:470] Verifying addon metrics-server=true in "addons-748800"
	I0318 10:39:37.350479   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.4382816s)
	I0318 10:39:37.350532   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.3573282s)
	I0318 10:39:37.833829   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.5859407s)
	I0318 10:39:37.833921   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.5945999s)
	I0318 10:39:37.838694   13044 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-748800 service yakd-dashboard -n yakd-dashboard
	
	I0318 10:39:37.834270   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.351273s)
	I0318 10:39:37.838773   13044 addons.go:470] Verifying addon registry=true in "addons-748800"
	I0318 10:39:37.843420   13044 out.go:177] * Verifying registry addon...
	I0318 10:39:37.849407   13044 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0318 10:39:37.899120   13044 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0318 10:39:37.899120   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
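
The kapi.go:75/86/96 lines here (and repeated through the rest of this log) are one polling loop: list pods by label selector, report how many matched, and keep waiting while any of them is still Pending. A compressed client-go sketch of the same idea, again assuming a clientset cs:

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitLabelRunning blocks until every pod matching selector in ns is Running.
	func waitLabelRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // nothing matching yet, or a transient list error
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // the repeated "current state: Pending" case
					}
				}
				return true, nil
			})
	}

The long runs of "current state: Pending: [<nil>]" below are simply this loop ticking roughly twice a second per addon.
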
	I0318 10:39:38.164221   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.5311303s)
	I0318 10:39:38.261181   13044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:39:38.261181   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:38.261181   13044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-748800 ).networkadapters[0]).ipaddresses[0]
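
The libmachine "[executing ==>]" lines shell out to PowerShell to ask Hyper-V for the VM's state and first IP address, then read the answers back from "[stdout =====>]". A minimal os/exec sketch of that pattern (the VM name is taken from the log; error handling is simplified):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// vmState returns the Hyper-V state of the named VM, e.g. "Running".
	func vmState(name string) (string, error) {
		cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
			fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
		out, err := cmd.Output() // stdout only; stderr surfaces via *exec.ExitError
		return strings.TrimSpace(string(out)), err
	}
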
	I0318 10:39:38.470292   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:38.797692   13044 pod_ready.go:102] pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:38.916778   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:39.409388   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:39.912328   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:40.365207   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:40.863565   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:41.171018   13044 main.go:141] libmachine: [stdout =====>] : 172.25.150.46
	
	I0318 10:39:41.171199   13044 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:39:41.171532   13044 sshutil.go:53] new ssh client: &{IP:172.25.150.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-748800\id_rsa Username:docker}
	I0318 10:39:41.228008   13044 pod_ready.go:102] pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:41.359108   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:41.870068   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:42.370876   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:42.375458   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.4794351s)
	I0318 10:39:42.375610   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.2174227s)
	I0318 10:39:42.375610   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.0228383s)
	W0318 10:39:42.375610   13044 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0318 10:39:42.375715   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.9324795s)
	I0318 10:39:42.375769   13044 retry.go:31] will retry after 209.555745ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
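
This failure is the usual CRD race: a single kubectl apply creates the VolumeSnapshot CRDs and, in the same invocation, a VolumeSnapshotClass instance before the new API is actually served, so the instance has no resource mapping yet. The retry.go line below backs off and re-applies; a sketch of that pattern, where runApply is a hypothetical stand-in returning the command's stderr:

	package main

	import (
		"strings"
		"time"
	)

	// applyWithRetry re-runs an apply whose only failure mode we tolerate
	// is the not-yet-established-CRD error, with doubling backoff.
	func applyWithRetry(runApply func() (string, error)) error {
		backoff := 200 * time.Millisecond
		var err error
		for attempt := 0; attempt < 5; attempt++ {
			var stderr string
			stderr, err = runApply()
			if err == nil {
				return nil
			}
			if !strings.Contains(stderr, "ensure CRDs are installed first") {
				return err // anything else should fail fast
			}
			time.Sleep(backoff)
			backoff *= 2
		}
		return err
	}

In this log the re-apply (run a few lines below with --force) succeeds once the CRDs are established.
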
	I0318 10:39:42.377826   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.6477642s)
	I0318 10:39:42.378230   13044 addons.go:470] Verifying addon ingress=true in "addons-748800"
	I0318 10:39:42.382233   13044 out.go:177] * Verifying ingress addon...
	I0318 10:39:42.386764   13044 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0318 10:39:42.427696   13044 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0318 10:39:42.427696   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0318 10:39:42.450591   13044 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
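
The default-storageclass warning is an optimistic-concurrency conflict: the StorageClass changed between minikube's read and its update. client-go's stock remedy is to re-read the object before each write attempt; a sketch using RetryOnConflict, assuming a clientset cs (the annotation key is the standard default-class marker):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation, retrying on
	// "the object has been modified" conflicts.
	func markNonDefault(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a conflict here triggers another Get+Update round
		})
	}
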
	I0318 10:39:42.611358   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 10:39:42.871061   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:42.900760   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:43.394418   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:43.405979   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:43.940146   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:43.940790   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:43.946449   13044 pod_ready.go:102] pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:44.387864   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:44.418211   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:44.874971   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:44.900400   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:45.507212   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:45.512182   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:45.925781   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:45.931538   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:45.987707   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (11.7983103s)
	I0318 10:39:45.987707   13044 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (10.1965284s)
	I0318 10:39:45.987707   13044 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-748800"
	I0318 10:39:45.991747   13044 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 10:39:45.999732   13044 out.go:177] * Verifying csi-hostpath-driver addon...
	I0318 10:39:46.005709   13044 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0318 10:39:46.005709   13044 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0318 10:39:46.009746   13044 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0318 10:39:46.009746   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0318 10:39:46.060271   13044 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0318 10:39:46.060360   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:46.136917   13044 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0318 10:39:46.136917   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0318 10:39:46.166646   13044 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 10:39:46.166646   13044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0318 10:39:46.214637   13044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 10:39:46.580890   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:46.585012   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:46.588680   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:46.591284   13044 pod_ready.go:102] pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:46.651557   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.0401098s)
	I0318 10:39:46.868402   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:46.906261   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:47.021150   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:47.365257   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:47.396110   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:47.523994   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:47.906724   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:47.918346   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:48.073720   13044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.8590719s)
	I0318 10:39:48.080485   13044 addons.go:470] Verifying addon gcp-auth=true in "addons-748800"
	I0318 10:39:48.085528   13044 out.go:177] * Verifying gcp-auth addon...
	I0318 10:39:48.084485   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:48.089487   13044 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0318 10:39:48.191749   13044 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0318 10:39:48.191821   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:48.359227   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:48.405926   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:48.519746   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:48.595275   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:48.714618   13044 pod_ready.go:102] pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:48.864027   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:48.896732   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:49.025500   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:49.098098   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:49.375874   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:49.400944   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:49.694261   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:49.701068   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:49.864642   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:49.894659   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:50.023306   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:50.107124   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:50.369456   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:50.400449   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:50.516485   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:50.606431   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:50.718264   13044 pod_ready.go:102] pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:50.861222   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:50.907139   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:51.020514   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:51.099206   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:51.367889   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:51.395470   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:51.528928   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:51.600588   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:51.858612   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:51.902157   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:52.016238   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:52.107871   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:52.365209   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:52.397623   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:52.522825   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:52.597725   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:52.726923   13044 pod_ready.go:102] pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:52.871554   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:52.901141   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:53.015287   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:53.107583   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:53.365917   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:53.396892   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:53.526405   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:53.601564   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:53.859175   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:53.905611   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:54.017761   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:54.109009   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:54.363710   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:54.394862   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:54.525303   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:54.599485   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:54.857580   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:54.902458   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:55.016052   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:55.112987   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:55.221709   13044 pod_ready.go:102] pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:55.367442   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:55.396194   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:55.535225   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:55.601982   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:55.859871   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:55.905079   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:56.017942   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:56.111058   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:56.365634   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:56.395406   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:56.523394   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:56.596235   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:56.874223   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:56.900023   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:57.014117   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:57.106932   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:57.366903   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:57.393141   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:57.532718   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:57.597693   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:57.714165   13044 pod_ready.go:102] pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace has status "Ready":"False"
	I0318 10:39:58.654951   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:58.655935   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:58.665069   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:58.666970   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:58.674315   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:58.675624   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:58.676238   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:58.680447   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:58.951436   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:58.952909   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:59.030138   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:59.103414   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:59.357329   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:59.405821   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:39:59.519001   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:39:59.611508   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:39:59.865907   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:39:59.895780   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:00.022463   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:00.096672   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:00.363227   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:00.394947   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:00.519662   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:00.599029   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:00.715599   13044 pod_ready.go:102] pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace has status "Ready":"False"
	I0318 10:40:00.864519   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:00.892853   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:01.022420   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:01.111041   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:01.365783   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:01.394868   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:01.523904   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:01.600413   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:01.879306   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:01.900479   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:02.029422   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:02.104308   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:02.229132   13044 pod_ready.go:92] pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace has status "Ready":"True"
	I0318 10:40:02.229132   13044 pod_ready.go:81] duration metric: took 29.5321883s for pod "coredns-5dd5756b68-lwtlt" in "kube-system" namespace to be "Ready" ...
	I0318 10:40:02.229132   13044 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-748800" in "kube-system" namespace to be "Ready" ...
	I0318 10:40:02.238923   13044 pod_ready.go:92] pod "etcd-addons-748800" in "kube-system" namespace has status "Ready":"True"
	I0318 10:40:02.238987   13044 pod_ready.go:81] duration metric: took 9.8552ms for pod "etcd-addons-748800" in "kube-system" namespace to be "Ready" ...
	I0318 10:40:02.238987   13044 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-748800" in "kube-system" namespace to be "Ready" ...
	I0318 10:40:02.248745   13044 pod_ready.go:92] pod "kube-apiserver-addons-748800" in "kube-system" namespace has status "Ready":"True"
	I0318 10:40:02.248745   13044 pod_ready.go:81] duration metric: took 9.7572ms for pod "kube-apiserver-addons-748800" in "kube-system" namespace to be "Ready" ...
	I0318 10:40:02.248745   13044 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-748800" in "kube-system" namespace to be "Ready" ...
	I0318 10:40:02.255875   13044 pod_ready.go:92] pod "kube-controller-manager-addons-748800" in "kube-system" namespace has status "Ready":"True"
	I0318 10:40:02.256065   13044 pod_ready.go:81] duration metric: took 7.3206ms for pod "kube-controller-manager-addons-748800" in "kube-system" namespace to be "Ready" ...
	I0318 10:40:02.256065   13044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9cxt9" in "kube-system" namespace to be "Ready" ...
	I0318 10:40:02.262782   13044 pod_ready.go:92] pod "kube-proxy-9cxt9" in "kube-system" namespace has status "Ready":"True"
	I0318 10:40:02.262782   13044 pod_ready.go:81] duration metric: took 6.717ms for pod "kube-proxy-9cxt9" in "kube-system" namespace to be "Ready" ...
	I0318 10:40:02.262782   13044 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-748800" in "kube-system" namespace to be "Ready" ...
	I0318 10:40:02.357913   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:02.404731   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:02.515708   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:02.611661   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:02.619441   13044 pod_ready.go:92] pod "kube-scheduler-addons-748800" in "kube-system" namespace has status "Ready":"True"
	I0318 10:40:02.619550   13044 pod_ready.go:81] duration metric: took 356.765ms for pod "kube-scheduler-addons-748800" in "kube-system" namespace to be "Ready" ...
	I0318 10:40:02.619550   13044 pod_ready.go:38] duration metric: took 43.3113569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 10:40:02.619550   13044 api_server.go:52] waiting for apiserver process to appear ...
	I0318 10:40:02.637717   13044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 10:40:02.698545   13044 api_server.go:72] duration metric: took 54.1259282s to wait for apiserver process to appear ...
	I0318 10:40:02.698613   13044 api_server.go:88] waiting for apiserver healthz status ...
	I0318 10:40:02.698613   13044 api_server.go:253] Checking apiserver healthz at https://172.25.150.46:8443/healthz ...
	I0318 10:40:02.714017   13044 api_server.go:279] https://172.25.150.46:8443/healthz returned 200:
	ok
	I0318 10:40:02.717685   13044 api_server.go:141] control plane version: v1.28.4
	I0318 10:40:02.717685   13044 api_server.go:131] duration metric: took 19.0714ms to wait for apiserver health ...
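
The healthz probe above is an HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 with body "ok". A sketch follows; in real use the TLS config would come from the kubeconfig's CA, so InsecureSkipVerify here is a stated shortcut for illustration only:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// healthz checks https://<host>/healthz and requires a 200 "ok" reply.
	func healthz(host string) error {
		c := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := c.Get(fmt.Sprintf("https://%s/healthz", host))
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
		}
		return nil // matches the "returned 200: ok" lines above
	}
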
	I0318 10:40:02.717685   13044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 10:40:02.843664   13044 system_pods.go:59] 18 kube-system pods found
	I0318 10:40:02.843664   13044 system_pods.go:61] "coredns-5dd5756b68-lwtlt" [d6d447ed-6e58-4adb-ad16-d89c7d5d5604] Running
	I0318 10:40:02.843664   13044 system_pods.go:61] "csi-hostpath-attacher-0" [a0430ddc-65f8-4c5d-a871-5fa4d74e398c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0318 10:40:02.843664   13044 system_pods.go:61] "csi-hostpath-resizer-0" [33c5eb80-dc46-473c-85cf-da6b628049ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0318 10:40:02.843664   13044 system_pods.go:61] "csi-hostpathplugin-jlbnm" [dd08fd8e-a9dc-4966-9b39-d58c505e12fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0318 10:40:02.843664   13044 system_pods.go:61] "etcd-addons-748800" [cff45b0c-c3e0-4c11-8a1a-1d7fe8c391e4] Running
	I0318 10:40:02.843664   13044 system_pods.go:61] "kube-apiserver-addons-748800" [45b34f65-a671-4c97-8cf7-fd4e69c08266] Running
	I0318 10:40:02.843664   13044 system_pods.go:61] "kube-controller-manager-addons-748800" [27a7fae9-614f-4363-b1eb-7cdb27b7fe73] Running
	I0318 10:40:02.843664   13044 system_pods.go:61] "kube-ingress-dns-minikube" [fa2458ae-1e81-45d5-b0bf-7b62d1240533] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0318 10:40:02.843664   13044 system_pods.go:61] "kube-proxy-9cxt9" [8cbdc70a-8927-4a72-bc09-0fc63aff1757] Running
	I0318 10:40:02.843664   13044 system_pods.go:61] "kube-scheduler-addons-748800" [15e2b716-fef6-421b-8aac-d0bd810b43f8] Running
	I0318 10:40:02.843664   13044 system_pods.go:61] "metrics-server-69cf46c98-z7ml8" [9ca72841-3a3f-4495-be4a-eaa6cfc05271] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 10:40:02.843664   13044 system_pods.go:61] "nvidia-device-plugin-daemonset-7gh64" [13df6930-c649-4c69-899b-ead23cdccba1] Running
	I0318 10:40:02.843664   13044 system_pods.go:61] "registry-plfjv" [a64e15ae-07d9-44a1-9c6c-b119905c56b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0318 10:40:02.843664   13044 system_pods.go:61] "registry-proxy-htczk" [fb4c18c7-fe5e-4fbf-ad1c-582c178d397e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0318 10:40:02.843664   13044 system_pods.go:61] "snapshot-controller-58dbcc7b99-kckn7" [64fdf74b-1a74-46ea-a585-db2a2668c721] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 10:40:02.843664   13044 system_pods.go:61] "snapshot-controller-58dbcc7b99-v2sjs" [adc471fb-eb18-4e9b-a5a1-108dfcf05383] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 10:40:02.843664   13044 system_pods.go:61] "storage-provisioner" [91f9981c-7ba8-4636-baaa-08b3fe8976b3] Running
	I0318 10:40:02.843664   13044 system_pods.go:61] "tiller-deploy-7b677967b9-f52g9" [687334c7-33e3-453f-8f41-bad41c523ac2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0318 10:40:02.843664   13044 system_pods.go:74] duration metric: took 125.9786ms to wait for pod list to return data ...
	I0318 10:40:02.843664   13044 default_sa.go:34] waiting for default service account to be created ...
	I0318 10:40:02.863404   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:02.893089   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:03.033132   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:03.035790   13044 default_sa.go:45] found service account: "default"
	I0318 10:40:03.035871   13044 default_sa.go:55] duration metric: took 192.2052ms for default service account to be created ...
	I0318 10:40:03.035871   13044 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 10:40:03.101142   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:03.244942   13044 system_pods.go:86] 18 kube-system pods found
	I0318 10:40:03.244942   13044 system_pods.go:89] "coredns-5dd5756b68-lwtlt" [d6d447ed-6e58-4adb-ad16-d89c7d5d5604] Running
	I0318 10:40:03.244942   13044 system_pods.go:89] "csi-hostpath-attacher-0" [a0430ddc-65f8-4c5d-a871-5fa4d74e398c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0318 10:40:03.244942   13044 system_pods.go:89] "csi-hostpath-resizer-0" [33c5eb80-dc46-473c-85cf-da6b628049ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0318 10:40:03.244942   13044 system_pods.go:89] "csi-hostpathplugin-jlbnm" [dd08fd8e-a9dc-4966-9b39-d58c505e12fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0318 10:40:03.244942   13044 system_pods.go:89] "etcd-addons-748800" [cff45b0c-c3e0-4c11-8a1a-1d7fe8c391e4] Running
	I0318 10:40:03.244942   13044 system_pods.go:89] "kube-apiserver-addons-748800" [45b34f65-a671-4c97-8cf7-fd4e69c08266] Running
	I0318 10:40:03.244942   13044 system_pods.go:89] "kube-controller-manager-addons-748800" [27a7fae9-614f-4363-b1eb-7cdb27b7fe73] Running
	I0318 10:40:03.244942   13044 system_pods.go:89] "kube-ingress-dns-minikube" [fa2458ae-1e81-45d5-b0bf-7b62d1240533] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0318 10:40:03.244942   13044 system_pods.go:89] "kube-proxy-9cxt9" [8cbdc70a-8927-4a72-bc09-0fc63aff1757] Running
	I0318 10:40:03.244942   13044 system_pods.go:89] "kube-scheduler-addons-748800" [15e2b716-fef6-421b-8aac-d0bd810b43f8] Running
	I0318 10:40:03.244942   13044 system_pods.go:89] "metrics-server-69cf46c98-z7ml8" [9ca72841-3a3f-4495-be4a-eaa6cfc05271] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 10:40:03.244942   13044 system_pods.go:89] "nvidia-device-plugin-daemonset-7gh64" [13df6930-c649-4c69-899b-ead23cdccba1] Running
	I0318 10:40:03.244942   13044 system_pods.go:89] "registry-plfjv" [a64e15ae-07d9-44a1-9c6c-b119905c56b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0318 10:40:03.244942   13044 system_pods.go:89] "registry-proxy-htczk" [fb4c18c7-fe5e-4fbf-ad1c-582c178d397e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0318 10:40:03.244942   13044 system_pods.go:89] "snapshot-controller-58dbcc7b99-kckn7" [64fdf74b-1a74-46ea-a585-db2a2668c721] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 10:40:03.244942   13044 system_pods.go:89] "snapshot-controller-58dbcc7b99-v2sjs" [adc471fb-eb18-4e9b-a5a1-108dfcf05383] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 10:40:03.244942   13044 system_pods.go:89] "storage-provisioner" [91f9981c-7ba8-4636-baaa-08b3fe8976b3] Running
	I0318 10:40:03.244942   13044 system_pods.go:89] "tiller-deploy-7b677967b9-f52g9" [687334c7-33e3-453f-8f41-bad41c523ac2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0318 10:40:03.244942   13044 system_pods.go:126] duration metric: took 209.0703ms to wait for k8s-apps to be running ...
	I0318 10:40:03.244942   13044 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 10:40:03.256648   13044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 10:40:03.282620   13044 system_svc.go:56] duration metric: took 37.6777ms WaitForService to wait for kubelet
	I0318 10:40:03.282620   13044 kubeadm.go:576] duration metric: took 54.7099995s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 10:40:03.282620   13044 node_conditions.go:102] verifying NodePressure condition ...
	I0318 10:40:03.357427   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:03.402765   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:03.416728   13044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 10:40:03.417002   13044 node_conditions.go:123] node cpu capacity is 2
	I0318 10:40:03.417055   13044 node_conditions.go:105] duration metric: took 134.4339ms to run NodePressure ...
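
The node_conditions.go figures above come straight off the Node object: capacity for ephemeral storage and CPU, plus the pressure conditions that the NodePressure check verifies. A sketch, assuming a clientset cs, of where numbers like "17734596Ki" and "cpu capacity is 2" originate:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodePressure prints each node's capacity and fails if any node
	// reports memory or disk pressure.
	func nodePressure(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name,
				n.Status.Capacity.StorageEphemeral().String(),
				n.Status.Capacity.Cpu().String())
			for _, c := range n.Status.Conditions {
				// MemoryPressure/DiskPressure should be False on a healthy node.
				if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
					c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s under %s", n.Name, c.Type)
				}
			}
		}
		return nil
	}
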
	I0318 10:40:03.417055   13044 start.go:240] waiting for startup goroutines ...
	I0318 10:40:03.518068   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:03.609225   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:03.866356   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:03.895877   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:04.032272   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:04.100877   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:04.371346   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:04.400698   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:04.516847   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:04.607620   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:04.863306   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:04.908284   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:05.021797   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:05.100809   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:05.366665   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:05.397227   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:05.534330   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:05.608142   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:05.870181   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:05.898897   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:06.013414   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:06.107032   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:06.365053   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:06.396085   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:06.527565   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:06.601142   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:06.856894   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:06.904908   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:07.017506   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:07.111085   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:07.362908   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:07.393345   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:07.524397   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:07.599838   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:07.868069   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:07.904138   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:08.029282   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:08.103750   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:08.646777   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:08.647342   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:08.647657   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:08.653095   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:08.859267   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:08.909194   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:09.294417   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:09.300659   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:09.706423   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:09.708485   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:09.710049   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:09.714439   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:09.871679   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:09.895086   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:10.024126   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:10.107720   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:10.371823   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:10.400504   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:10.528525   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:10.607125   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:10.864375   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:10.907255   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:11.019968   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:11.105675   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:11.367297   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:11.397281   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:11.528622   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:11.604201   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:11.861526   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:11.908024   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:12.018322   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:12.100959   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:12.365007   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:12.394536   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:12.524720   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:12.598196   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:12.861059   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:12.905242   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:13.017229   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:13.110432   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:13.366766   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:13.394403   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:13.897818   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:13.903926   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:13.904940   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:13.905471   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:14.046900   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:14.510122   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:14.510122   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:14.514434   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:14.516538   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:14.595981   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:14.867035   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:14.898900   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:15.038201   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:15.108942   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:15.396588   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:15.427875   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:15.624191   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:15.624999   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:15.870268   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:15.901651   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:16.028651   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:16.102699   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:16.359771   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:16.403849   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:16.520798   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:16.610358   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:16.864087   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:16.892214   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:17.021498   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:17.096606   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:17.368394   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:17.397949   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:17.525864   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:17.604287   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:17.861181   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:17.905704   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:18.020323   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:18.098365   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:18.366486   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:18.396433   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:18.526458   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:18.604090   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:18.858205   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:18.906954   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:19.021259   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:19.101405   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:19.367107   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:19.396931   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:19.525596   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:19.601930   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:19.860557   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:19.904114   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:20.018439   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:20.108340   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:20.364154   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:20.393729   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:20.523849   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:20.601323   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:20.856826   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:20.904193   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:21.017588   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:21.109237   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:21.366668   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:21.397958   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:21.527252   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:21.602196   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:21.860913   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:21.907502   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:22.019128   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:22.099639   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:22.365587   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:22.394601   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:22.528819   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:22.604145   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:22.858174   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:22.905256   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:23.018406   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:23.109821   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:23.368799   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:23.399952   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:23.528569   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:23.604080   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:23.859088   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:23.906017   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:24.019021   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:24.108588   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:24.368286   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:24.397413   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:24.525251   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:24.599428   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:24.870333   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:24.900390   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:25.029946   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:25.104537   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:25.363330   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:25.406375   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:25.519748   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:25.596548   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:25.867690   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:25.897792   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:26.027769   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:26.102107   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:26.358730   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:26.402663   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:26.517574   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:26.608436   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:26.864510   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:26.894962   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:27.025275   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:27.100989   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:27.370055   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:27.400877   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:27.515160   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:27.606856   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:27.863066   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:27.907348   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:28.022430   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:28.098531   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:28.369877   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:28.400177   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:28.528803   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:28.604028   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:28.860438   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:28.893057   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:29.025780   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:29.099329   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:29.373106   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:29.402023   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:29.514643   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:29.603737   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:29.869113   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:29.898185   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:30.043822   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:30.106091   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:30.746036   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:30.750847   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:30.750847   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:30.750847   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:30.859620   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:30.905490   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:31.020138   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:31.098804   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:31.366094   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:31.396331   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:31.524741   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:31.601055   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:31.858392   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:31.903734   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:32.015159   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:32.108489   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:32.374629   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:32.395576   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:32.524022   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:32.600611   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:32.859368   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:32.907401   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:33.016276   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:33.107282   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:33.365534   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:33.395432   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:33.524294   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:33.600912   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:33.857355   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:33.903797   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:34.016883   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:34.110193   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:34.365634   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:34.395928   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:34.526090   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:34.600943   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:35.113047   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:35.113594   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:35.246799   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:35.254113   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:35.991909   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:35.994249   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:35.994249   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:35.995888   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:35.998926   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:36.003231   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:36.026200   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:36.103562   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:36.749798   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:36.749976   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:36.752718   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:36.758800   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:36.857748   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:36.908246   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:37.020653   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:37.113071   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:37.367202   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:37.396951   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:37.531086   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:37.603653   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:37.867561   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:37.909865   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:38.021790   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:38.104678   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:38.371747   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:38.405119   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:38.529681   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:38.604788   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:38.869003   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:38.903465   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:39.017384   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:39.110851   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:39.362753   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:39.409433   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:39.520257   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:39.595520   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:39.875211   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:39.902222   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:40.027689   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:40.107347   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:40.359133   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:40.405376   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:40.519375   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:40.595466   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:41.141894   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:41.146339   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:41.146785   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:41.148783   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:41.372029   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:41.402212   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:41.516582   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:41.608322   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:41.865090   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:41.903680   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:42.032690   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:42.103451   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:42.375828   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:42.398664   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:42.540194   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:42.604715   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:42.865963   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:42.909606   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:43.030115   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:43.111075   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:43.369423   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:43.401468   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:43.532396   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:43.616797   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:43.859087   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:43.911241   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:44.019012   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:44.129155   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:44.362424   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:44.411060   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:44.529770   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:44.609542   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:44.865841   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:44.896350   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:45.026616   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:45.101014   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:45.374713   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:45.404668   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:45.517501   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:45.608379   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:45.894594   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:45.896324   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:46.021472   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:46.099551   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:46.368421   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:46.396991   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:46.528409   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:46.604191   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:46.862430   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:46.907583   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:47.020502   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:47.110192   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:47.367620   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:47.398314   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:47.798599   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:47.802850   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:48.299201   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:48.299201   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:48.300181   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:48.304181   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:48.359977   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:48.405568   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:48.517224   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:48.606912   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:48.862010   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:48.907456   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:49.023668   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:49.099182   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:49.370706   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:49.401675   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:49.517084   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:49.608578   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:49.866318   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:49.896557   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:50.025370   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:50.100959   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:50.358530   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:50.403403   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:50.515011   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:50.608012   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:50.863868   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:50.894132   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:51.023185   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:51.100757   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:51.357152   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:51.402171   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:51.514723   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:51.607044   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:51.863575   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:52.054601   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:52.058782   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:52.661489   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:52.662145   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:52.662145   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:52.664128   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:52.696120   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:52.956856   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:52.959289   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:53.029957   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:53.110027   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:53.359180   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:53.406391   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:53.516458   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:53.605058   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:54.249672   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:54.253913   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:54.255177   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:54.259936   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:54.403889   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:54.405332   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:54.524527   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:54.597976   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:54.867713   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:54.900095   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:55.043691   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:55.104972   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:55.364450   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:55.404684   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:55.522299   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:55.595377   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:55.866471   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:55.897533   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:56.025320   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:56.100487   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:56.373737   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:56.403365   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:56.514386   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:56.607991   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:56.864962   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:56.892693   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:57.020994   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:57.100459   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:57.369448   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:57.401826   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:57.513637   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:57.606308   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:57.873642   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:57.903026   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:58.016142   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:58.108074   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:58.619317   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:58.634328   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:58.635909   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:58.639724   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:58.864495   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:58.906808   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:59.069085   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:59.120092   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:59.369659   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:59.400313   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:40:59.525955   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:40:59.601511   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:40:59.871704   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:40:59.912120   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:00.029838   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:00.107628   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:00.363257   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:00.394684   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:00.527093   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:00.602539   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:00.871686   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:00.899429   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:01.029539   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:01.105760   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:01.361145   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:01.407739   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:01.521958   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:01.597535   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:01.871337   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:01.903241   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:02.031741   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:02.107426   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:02.359800   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:02.405518   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:02.520237   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:02.595906   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:02.868350   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:02.898066   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:03.040748   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:03.119427   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:03.358228   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:03.406766   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:03.516613   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:03.608382   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:03.864696   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:03.894901   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:04.025819   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:04.098354   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:04.370810   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:04.399952   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:04.532771   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:04.607530   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:04.864189   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:04.912855   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:05.023600   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:05.097491   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:05.371587   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:05.399019   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:05.513771   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:05.604949   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:05.861099   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:05.907628   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:06.022717   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:06.098013   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:06.370865   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:06.449970   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:06.528959   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:06.602213   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:07.158245   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:07.158245   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:07.159859   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:07.163307   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:07.363547   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:07.409256   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:07.522140   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:07.610425   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:07.863303   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:07.909862   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:08.020557   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:08.097979   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:08.365028   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:08.395424   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:08.520367   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:08.596962   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:08.867124   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:08.900641   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:09.032485   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:09.102901   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:09.396469   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:09.409909   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:09.523477   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:09.616757   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:09.870128   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:09.900720   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:10.032181   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:10.114148   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:10.358928   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:10.404065   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:10.517272   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:10.608021   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:10.863010   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:10.895507   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:11.021816   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:11.097627   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:11.369580   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:11.398034   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:11.525529   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:11.600973   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:11.859675   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:11.906191   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:12.019531   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:12.109822   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:12.366388   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:12.395728   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:12.522245   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:12.599349   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:12.870946   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:12.899698   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:13.030305   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:13.102789   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:13.357662   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:13.402379   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:13.516493   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:13.606321   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:13.861943   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:13.907965   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:14.021739   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:14.100402   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:14.369581   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:14.399087   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:14.526549   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:14.603515   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:14.858505   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:14.905568   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:15.015318   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:15.109366   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:15.361926   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:15.406658   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:15.519728   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:15.608911   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:15.869556   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:15.897931   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:16.013357   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:16.103745   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:16.360633   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:16.406123   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:16.520136   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:16.600014   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:16.868520   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:16.898140   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:17.028189   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:17.103972   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:17.371546   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:17.400482   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:17.530067   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:17.606079   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:17.862088   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:17.907208   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:18.021653   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:18.100068   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:18.368501   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:18.397983   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:18.527101   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:18.601429   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:18.871762   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:18.901782   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:19.030610   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:19.102665   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:19.704641   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:19.704974   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:19.706566   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:19.710725   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:20.145344   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:20.148980   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:20.150520   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:20.154730   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:20.445993   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:20.446675   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:21.062019   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:21.067750   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:21.070405   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:21.075416   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:21.099127   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:21.107707   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:21.372133   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 10:41:21.400167   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:21.525283   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:21.597232   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:21.865638   13044 kapi.go:107] duration metric: took 1m44.0155889s to wait for kubernetes.io/minikube-addons=registry ...
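
The kapi.go:96 entries above come from minikube's addon wait loop, which polls each addon's pods by label selector roughly twice per second until they leave Pending; the kapi.go:107 line just above marks the registry wait finishing after 1m44s, while the ingress-nginx, csi-hostpath-driver, and gcp-auth waits continue below. A minimal client-go sketch of that polling pattern follows; the function name waitForPodRunning and the 500ms interval are illustrative assumptions, not minikube's actual kapi implementation:

	// Minimal sketch (assumed shape, not minikube's kapi.go): poll pods
	// matching a label selector and log the phase until one is Running.
	package waitsketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodRunning polls every 500ms until a pod matching selector in ns
	// reaches Running, logging the current phase on each attempt, or until
	// timeout elapses.
	func waitForPodRunning(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // treat transient API errors as retryable
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
				return false, nil // no matching pod yet, or still Pending
			})
	}

Each "waiting for pod" line in the trace corresponds to one iteration of such a poll; the trailing [<nil>] is most likely an empty error value printed alongside the phase.
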
	I0318 10:41:21.897599   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:22.027754   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:22.100732   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:22.405178   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:22.515348   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:22.608679   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:22.897025   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:23.026279   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:23.101954   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:23.405880   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:23.517481   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:23.609709   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:23.896176   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:24.024918   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:24.099412   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:24.400359   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:24.529211   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:24.603865   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:24.905810   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:25.015143   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:25.104980   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:25.410149   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:25.516526   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:25.610569   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:25.895393   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:26.026560   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:26.101259   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:26.407696   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:26.527406   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:26.602910   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:26.907672   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:27.024683   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:27.101141   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:27.400679   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:27.528411   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:27.604609   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:27.904381   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:28.015524   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:28.157665   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:28.402474   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:28.523591   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:28.599187   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:28.900479   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:29.026866   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:29.109561   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:29.409216   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:29.526832   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:29.597745   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:29.904489   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:30.016337   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:30.108725   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:30.399945   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:30.533522   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:30.604376   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:30.897158   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:31.026994   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:31.105361   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:31.402486   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:31.515784   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:31.606806   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:31.911054   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:32.062815   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:32.105193   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:32.401721   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:32.517154   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:32.607426   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:33.402011   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:33.405258   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:33.408863   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:33.420204   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:33.534794   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:33.603071   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:33.929747   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:34.020127   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:34.108914   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:34.410004   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:34.525748   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:34.609179   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:34.907598   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:35.022537   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:35.098362   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:35.401038   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:35.534639   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:35.606300   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:35.894344   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:36.106728   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:36.113119   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:36.401636   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:36.518621   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:36.640948   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:36.907588   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:37.021670   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:37.100848   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:37.401593   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:37.532357   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:37.613089   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:37.901456   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:38.028379   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:38.102957   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:38.408764   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:38.521745   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:38.599107   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:38.904303   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:39.023758   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:39.100408   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:39.403008   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:39.906919   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:40.008601   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:40.008671   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:40.018910   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:40.115161   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:40.395443   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:40.527570   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:40.601464   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:40.907765   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:41.026394   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:41.102386   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:41.404024   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:41.517534   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:41.609085   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:41.895702   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:42.023629   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:42.098544   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:42.403819   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:42.517809   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:42.610021   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:42.897081   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:43.025064   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:43.099156   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:43.400189   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:43.524218   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:43.600230   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:43.906729   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:44.021046   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:44.107923   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:44.397651   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:44.525887   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:44.603646   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:44.903252   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:45.029661   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:45.134418   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:45.406878   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:45.520916   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:45.610670   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:45.898747   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:46.030785   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:46.101728   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:46.403461   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:46.517102   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:46.607858   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:46.899087   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:47.026161   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:47.102487   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:47.407757   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:47.572689   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:47.606129   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:47.909681   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:48.017067   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:48.123191   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:48.401393   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:48.522528   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:48.596980   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:48.896480   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:49.025967   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:49.099388   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:49.404415   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:49.515775   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:49.606784   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:49.939881   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:50.021019   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:50.098149   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:50.395635   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:50.524985   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:50.600490   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:50.901731   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:51.156370   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:51.160942   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:51.419350   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:51.521950   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:51.607870   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:51.903014   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:52.032986   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:52.113498   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:52.402019   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:52.526587   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:52.600349   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:52.902062   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:53.017074   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:53.107392   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:53.397554   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:53.524855   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:53.601582   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:53.897501   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:54.025403   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:54.101921   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:54.399708   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:54.527754   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:54.604462   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:54.904876   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:55.018196   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:55.110779   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:55.397844   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:55.527305   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:55.611699   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:55.895171   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:56.022611   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:56.474050   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:56.475379   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:56.725300   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:56.730125   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:56.904835   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:57.291527   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:57.293920   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:57.460486   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:57.526830   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:57.604340   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:57.915212   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:58.028723   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:58.106410   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:58.404118   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:58.522941   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:58.610065   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:58.909210   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:59.026871   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:59.102117   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:59.401457   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:41:59.529068   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:41:59.603814   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:41:59.909042   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:00.020640   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:42:00.109982   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:00.396620   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:00.523400   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:42:00.597515   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:00.901496   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:01.028092   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:42:01.103239   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:01.409762   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:01.517040   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:42:01.616235   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:01.911953   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:02.019200   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 10:42:02.112074   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:02.401825   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:02.538199   13044 kapi.go:107] duration metric: took 2m16.5316461s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0318 10:42:02.604850   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:02.906022   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:03.113379   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:03.399300   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:03.601753   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:03.905882   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:04.114062   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:04.397861   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:04.603271   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:04.954216   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:05.118031   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:05.398872   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:05.604025   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:05.904210   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:06.109199   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:06.405004   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:06.897760   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:06.899525   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:07.109666   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:07.398813   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:07.605313   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:07.905400   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:08.108218   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:08.396681   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:08.606175   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:08.894557   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:09.101713   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:09.393605   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:09.607936   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:09.896665   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:10.102850   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:10.412383   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:10.596654   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:10.902059   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:11.107162   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:11.409551   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:11.610919   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:11.900696   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:12.105956   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:12.408272   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:12.598111   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:12.902139   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:13.105510   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:13.407596   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:13.599834   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:13.904349   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:14.110104   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:14.500042   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:14.997038   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:14.997953   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:15.107155   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:15.899659   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:15.901192   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:15.907145   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:16.101728   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:16.397532   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:16.600029   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:16.908918   13044 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 10:42:17.117822   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:17.403010   13044 kapi.go:107] duration metric: took 2m35.015391s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0318 10:42:17.601372   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:18.108355   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:18.618875   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:19.103356   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:19.600203   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:20.197133   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:20.608016   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:21.100699   13044 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 10:42:21.602379   13044 kapi.go:107] duration metric: took 2m33.5118831s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0318 10:42:21.605338   13044 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-748800 cluster.
	I0318 10:42:21.607882   13044 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0318 10:42:21.627142   13044 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0318 10:42:21.635929   13044 out.go:177] * Enabled addons: storage-provisioner, metrics-server, ingress-dns, helm-tiller, cloud-spanner, yakd, inspektor-gadget, nvidia-device-plugin, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0318 10:42:21.638372   13044 addons.go:505] duration metric: took 3m13.0648942s for enable addons: enabled=[storage-provisioner metrics-server ingress-dns helm-tiller cloud-spanner yakd inspektor-gadget nvidia-device-plugin storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0318 10:42:21.638462   13044 start.go:245] waiting for cluster config update ...
	I0318 10:42:21.638564   13044 start.go:254] writing updated cluster config ...
	I0318 10:42:21.652293   13044 ssh_runner.go:195] Run: rm -f paused
	I0318 10:42:21.871097   13044 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 10:42:21.874226   13044 out.go:177] * Done! kubectl is now configured to use "addons-748800" cluster and "default" namespace by default
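	
	==> note: opting a pod out of gcp-auth <==
	The gcp-auth messages above describe an opt-out: once the addon is enabled, the webhook mounts GCP credentials into every new pod unless the pod carries the `gcp-auth-skip-secret` label. A minimal sketch of such a manifest follows; only the label key comes from the message above, while the "true" value and the pod/container names are assumptions for illustration.
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                   # illustrative name
	  labels:
	    gcp-auth-skip-secret: "true"       # label key from the message above; the "true" value is an assumption
	spec:
	  containers:
	  - name: app
	    image: gcr.io/k8s-minikube/busybox # image reused from this test's registry probe
	    command: ["sleep", "3600"]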
	
	
	==> Docker <==
	Mar 18 10:43:09 addons-748800 dockerd[1336]: time="2024-03-18T10:43:09.393698419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 10:43:09 addons-748800 dockerd[1336]: time="2024-03-18T10:43:09.393926320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 10:43:12 addons-748800 dockerd[1336]: time="2024-03-18T10:43:12.835962411Z" level=info msg="shim disconnected" id=cf8560b9aaa228a0bb5e13bccb0ae3d5af16c73c1486ce0f69ccda4ead8c5d30 namespace=moby
	Mar 18 10:43:12 addons-748800 dockerd[1336]: time="2024-03-18T10:43:12.836738212Z" level=warning msg="cleaning up after shim disconnected" id=cf8560b9aaa228a0bb5e13bccb0ae3d5af16c73c1486ce0f69ccda4ead8c5d30 namespace=moby
	Mar 18 10:43:12 addons-748800 dockerd[1336]: time="2024-03-18T10:43:12.836815912Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 10:43:12 addons-748800 dockerd[1330]: time="2024-03-18T10:43:12.837644113Z" level=info msg="ignoring event" container=cf8560b9aaa228a0bb5e13bccb0ae3d5af16c73c1486ce0f69ccda4ead8c5d30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 10:43:13 addons-748800 cri-dockerd[1223]: time="2024-03-18T10:43:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"metrics-server-69cf46c98-z7ml8_kube-system\": unexpected command output nsenter: cannot open /proc/4563/ns/net: No such file or directory\n with error: exit status 1"
	Mar 18 10:43:13 addons-748800 dockerd[1330]: time="2024-03-18T10:43:13.050355144Z" level=info msg="ignoring event" container=ced0aaa19eb6482798bdeb038993f442887a233004760d240f4a0abfa33ee148 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 10:43:13 addons-748800 dockerd[1336]: time="2024-03-18T10:43:13.050802548Z" level=info msg="shim disconnected" id=ced0aaa19eb6482798bdeb038993f442887a233004760d240f4a0abfa33ee148 namespace=moby
	Mar 18 10:43:13 addons-748800 dockerd[1336]: time="2024-03-18T10:43:13.050929849Z" level=warning msg="cleaning up after shim disconnected" id=ced0aaa19eb6482798bdeb038993f442887a233004760d240f4a0abfa33ee148 namespace=moby
	Mar 18 10:43:13 addons-748800 dockerd[1336]: time="2024-03-18T10:43:13.050948949Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 10:43:16 addons-748800 dockerd[1330]: time="2024-03-18T10:43:16.162285849Z" level=info msg="ignoring event" container=c164f57220e8b94279f33733e614ed77c62e6fe3215cae8130c90464e43bf8b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 10:43:16 addons-748800 dockerd[1336]: time="2024-03-18T10:43:16.162805453Z" level=info msg="shim disconnected" id=c164f57220e8b94279f33733e614ed77c62e6fe3215cae8130c90464e43bf8b0 namespace=moby
	Mar 18 10:43:16 addons-748800 dockerd[1336]: time="2024-03-18T10:43:16.162883653Z" level=warning msg="cleaning up after shim disconnected" id=c164f57220e8b94279f33733e614ed77c62e6fe3215cae8130c90464e43bf8b0 namespace=moby
	Mar 18 10:43:16 addons-748800 dockerd[1336]: time="2024-03-18T10:43:16.162897353Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 10:43:16 addons-748800 dockerd[1336]: time="2024-03-18T10:43:16.198694436Z" level=warning msg="cleanup warnings time=\"2024-03-18T10:43:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Mar 18 10:43:16 addons-748800 dockerd[1330]: time="2024-03-18T10:43:16.355622277Z" level=info msg="ignoring event" container=f44bbf54be59c573c67091ca30886641a28556641718a10e4df8311869c8b87a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 10:43:16 addons-748800 dockerd[1336]: time="2024-03-18T10:43:16.356313982Z" level=info msg="shim disconnected" id=f44bbf54be59c573c67091ca30886641a28556641718a10e4df8311869c8b87a namespace=moby
	Mar 18 10:43:16 addons-748800 dockerd[1336]: time="2024-03-18T10:43:16.356598585Z" level=warning msg="cleaning up after shim disconnected" id=f44bbf54be59c573c67091ca30886641a28556641718a10e4df8311869c8b87a namespace=moby
	Mar 18 10:43:16 addons-748800 dockerd[1336]: time="2024-03-18T10:43:16.356889687Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 10:43:17 addons-748800 dockerd[1336]: time="2024-03-18T10:43:17.929115078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 10:43:17 addons-748800 dockerd[1336]: time="2024-03-18T10:43:17.929296678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 10:43:17 addons-748800 dockerd[1336]: time="2024-03-18T10:43:17.929320478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 10:43:17 addons-748800 dockerd[1336]: time="2024-03-18T10:43:17.929636378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 10:43:18 addons-748800 cri-dockerd[1223]: time="2024-03-18T10:43:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2010bbcf2e7f0498e0b6d8bcb730e5a6c2566109620d43b84299f7528106861f/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	0ea58bab443fd       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff                            29 seconds ago       Exited              gadget                                   3                   2ecf9c923ac39       gadget-w4zxv
	b2e0e12bc2487       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 About a minute ago   Running             gcp-auth                                 0                   12075854678ac       gcp-auth-7d69788767-vn82r
	5ef79efddf032       registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c                             About a minute ago   Running             controller                               0                   f8e4c88bda5c2       ingress-nginx-controller-76dc478dd8-v9dls
	f71374b0c3ec2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   5d4dac9e400f8       csi-hostpathplugin-jlbnm
	4fd874c178342       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   5d4dac9e400f8       csi-hostpathplugin-jlbnm
	25784734ba727       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   5d4dac9e400f8       csi-hostpathplugin-jlbnm
	7291f5462c870       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   5d4dac9e400f8       csi-hostpathplugin-jlbnm
	cf560071e93e9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   5d4dac9e400f8       csi-hostpathplugin-jlbnm
	66f732e91b46a       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   fdbf4062b4699       csi-hostpath-resizer-0
	8f35fce26f304       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   5d4dac9e400f8       csi-hostpathplugin-jlbnm
	95bfc232fa702       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   cd3b77c00613d       csi-hostpath-attacher-0
	901805b26689e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334                   About a minute ago   Exited              patch                                    0                   479f024ee4538       ingress-nginx-admission-patch-w89zm
	0c82bb4d00cbe       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334                   About a minute ago   Exited              create                                   0                   ff7e394cceb98       ingress-nginx-admission-create-mxjll
	a6ec5d719c7f9       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       About a minute ago   Running             local-path-provisioner                   0                   bbff0573cb408       local-path-provisioner-78b46b4d5c-sz9dx
	ab15c7d0ff097       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   d531058a12827       snapshot-controller-58dbcc7b99-v2sjs
	77c6922801f76       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   d55303dc80275       snapshot-controller-58dbcc7b99-kckn7
	964c60f7fac51       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   0e0a262d1e8c1       yakd-dashboard-9947fc6bf-vcxng
	fd1a26aad5579       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   75bb7e20beb33       kube-ingress-dns-minikube
	1a70d50847816       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  3 minutes ago        Running             tiller                                   0                   ccaecca2fa8c1       tiller-deploy-7b677967b9-f52g9
	9f732a2520b4b       gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50                               3 minutes ago        Running             cloud-spanner-emulator                   0                   7cae986e1393c       cloud-spanner-emulator-5446596998-fmjq4
	63189bfcfb9f1       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   3e428a6a8a8d1       storage-provisioner
	c47becdaa03ad       ead0a4a53df89                                                                                                                                4 minutes ago        Running             coredns                                  0                   27c27027e7676       coredns-5dd5756b68-lwtlt
	20429ac667abf       83f6cc407eed8                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   f1d06d0f6ac8f       kube-proxy-9cxt9
	57868f1b4ef4c       73deb9a3f7025                                                                                                                                4 minutes ago        Running             etcd                                     0                   ad3e3082bb749       etcd-addons-748800
	7655462e641cf       e3db313c6dbc0                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   83c241f84f031       kube-scheduler-addons-748800
	fbefab8a6824b       d058aa5ab969c                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   aac896e13c257       kube-controller-manager-addons-748800
	ebaa22e442009       7fe0e6f37db33                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   875c3b2ca8eeb       kube-apiserver-addons-748800
	
	
	==> controller_ingress [5ef79efddf03] <==
	W0318 10:42:16.909376       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0318 10:42:16.910057       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0318 10:42:16.924693       7 main.go:249] "Running in Kubernetes cluster" major="1" minor="28" git="v1.28.4" state="clean" commit="bae2c62678db2b5053817bc97181fcc2e8388103" platform="linux/amd64"
	I0318 10:42:17.123569       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0318 10:42:17.254231       7 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0318 10:42:17.278341       7 nginx.go:265] "Starting NGINX Ingress controller"
	I0318 10:42:17.315345       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"103e33d4-2bf8-4571-afe4-17a6789ecdf0", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0318 10:42:17.319723       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"2c521962-778c-43d6-b682-464efd1d26dd", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0318 10:42:17.319849       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"1e1ec613-9244-41f9-a770-ca15ea391487", APIVersion:"v1", ResourceVersion:"716", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0318 10:42:18.481675       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0318 10:42:18.481601       7 nginx.go:308] "Starting NGINX process"
	I0318 10:42:18.482987       7 nginx.go:328] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0318 10:42:18.484543       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0318 10:42:18.556746       7 controller.go:210] "Backend successfully reloaded"
	I0318 10:42:18.557065       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0318 10:42:18.557421       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-76dc478dd8-v9dls", UID:"092112b3-3f0d-46f9-bdb7-e647afd492d0", APIVersion:"v1", ResourceVersion:"1266", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0318 10:42:18.586798       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0318 10:42:18.588534       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-76dc478dd8-v9dls"
	I0318 10:42:18.608702       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-76dc478dd8-v9dls" node="addons-748800"
	  Build:         71f78d49f0a496c31d4c19f095469f3f23900f8a
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [c47becdaa03a] <==
	[INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	[INFO] Reloading complete
	[INFO] 127.0.0.1:54032 - 19480 "HINFO IN 2643129049825001207.7434586045075812714. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.149061494s
	[INFO] 10.244.0.10:45739 - 28138 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000444096s
	[INFO] 10.244.0.10:45739 - 65504 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000119799s
	[INFO] 10.244.0.10:34713 - 22092 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.001185489s
	[INFO] 10.244.0.10:34713 - 560 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000420296s
	[INFO] 10.244.0.10:38001 - 17069 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116699s
	[INFO] 10.244.0.10:38001 - 50088 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000150398s
	[INFO] 10.244.0.10:48695 - 596 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000306497s
	[INFO] 10.244.0.10:48695 - 23377 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000272797s
	[INFO] 10.244.0.10:50020 - 53745 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000266698s
	[INFO] 10.244.0.10:34334 - 10982 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000635s
	[INFO] 10.244.0.10:44200 - 3059 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103099s
	[INFO] 10.244.0.10:36217 - 26527 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000176898s
	[INFO] 10.244.0.22:60893 - 35331 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0003486s
	[INFO] 10.244.0.22:53371 - 56547 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0001301s
	[INFO] 10.244.0.22:46126 - 19188 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000097s
	[INFO] 10.244.0.22:46139 - 36000 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0002256s
	[INFO] 10.244.0.22:43576 - 24312 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118801s
	[INFO] 10.244.0.22:53488 - 23773 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0003301s
	[INFO] 10.244.0.22:46986 - 41622 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 192 0.002938803s
	[INFO] 10.244.0.22:50767 - 61807 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.003035103s
	[INFO] 10.244.0.25:38867 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0001584s
	[INFO] 10.244.0.25:35768 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0012155s
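	
	==> note: why coredns answers NXDOMAIN before NOERROR <==
	The NXDOMAIN/NOERROR pairs above are ordinary search-list expansion, not lookup failures: with the resolv.conf that cri-dockerd rewrote earlier (search kube-system.svc.cluster.local svc.cluster.local cluster.local, options ndots:5), a name such as registry.kube-system.svc.cluster.local has fewer than five dots, so it is first tried against each search domain (each attempt returning NXDOMAIN) before being sent as an absolute name (NOERROR). The same behaviour, expressed as a pod-level dnsConfig sketch: the dnsConfig fields are the standard Kubernetes API and the values are copied from the resolv.conf line above, but the pod itself is illustrative.
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: dns-example                    # illustrative name
	spec:
	  containers:
	  - name: app
	    image: gcr.io/k8s-minikube/busybox
	    command: ["sleep", "3600"]
	  dnsPolicy: "None"
	  dnsConfig:
	    nameservers:
	    - 10.96.0.10                       # cluster DNS IP from the resolv.conf line above
	    searches:                          # tried in order for names with fewer than 5 dots,
	    - kube-system.svc.cluster.local    # which produces the NXDOMAIN entries seen above
	    - svc.cluster.local
	    - cluster.local
	    options:
	    - name: ndots
	      value: "5"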
	
	
	==> describe nodes <==
	Name:               addons-748800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-748800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=addons-748800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T10_38_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-748800
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-748800"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 10:38:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-748800
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 10:43:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 10:43:03 +0000   Mon, 18 Mar 2024 10:38:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 10:43:03 +0000   Mon, 18 Mar 2024 10:38:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 10:43:03 +0000   Mon, 18 Mar 2024 10:38:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 10:43:03 +0000   Mon, 18 Mar 2024 10:38:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.150.46
	  Hostname:    addons-748800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	System Info:
	  Machine ID:                 aefc5a42f05b4d7db4c94b2ff6c63a10
	  System UUID:                ebf6cd93-d78b-4549-be49-c599e80167a2
	  Boot ID:                    a244dcce-30ad-4c1d-894e-a6299cd41c28
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5446596998-fmjq4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  gadget                      gadget-w4zxv                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  gcp-auth                    gcp-auth-7d69788767-vn82r                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  ingress-nginx               ingress-nginx-controller-76dc478dd8-v9dls    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m40s
	  kube-system                 coredns-5dd5756b68-lwtlt                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m12s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 csi-hostpathplugin-jlbnm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-addons-748800                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m26s
	  kube-system                 helm-test                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-apiserver-addons-748800                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-controller-manager-addons-748800        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kube-proxy-9cxt9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-addons-748800                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 snapshot-controller-58dbcc7b99-kckn7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 snapshot-controller-58dbcc7b99-v2sjs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 tiller-deploy-7b677967b9-f52g9               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  local-path-storage          local-path-provisioner-78b46b4d5c-sz9dx      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-vcxng               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m58s                  kube-proxy       
	  Normal  Starting                 4m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m35s (x8 over 4m35s)  kubelet          Node addons-748800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m35s (x8 over 4m35s)  kubelet          Node addons-748800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m35s (x7 over 4m35s)  kubelet          Node addons-748800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m25s                  kubelet          Node addons-748800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s                  kubelet          Node addons-748800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s                  kubelet          Node addons-748800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m24s                  kubelet          Node addons-748800 status is now: NodeReady
	  Normal  RegisteredNode           4m14s                  node-controller  Node addons-748800 event: Registered Node addons-748800 in Controller
	
	
	==> dmesg <==
	[Mar18 10:39] systemd-fstab-generator[3202]: Ignoring "noauto" option for root device
	[  +0.757098] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.619797] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.181871] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.298797] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.034874] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.520714] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.178929] kauditd_printk_skb: 91 callbacks suppressed
	[Mar18 10:40] kauditd_printk_skb: 34 callbacks suppressed
	[ +14.304119] kauditd_printk_skb: 2 callbacks suppressed
	[ +31.335687] kauditd_printk_skb: 2 callbacks suppressed
	[Mar18 10:41] kauditd_printk_skb: 31 callbacks suppressed
	[ +25.037684] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.525287] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.295711] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.044476] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.143890] kauditd_printk_skb: 2 callbacks suppressed
	[Mar18 10:42] kauditd_printk_skb: 18 callbacks suppressed
	[ +16.167409] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.930320] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.055442] kauditd_printk_skb: 31 callbacks suppressed
	[ +13.627771] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.469903] kauditd_printk_skb: 56 callbacks suppressed
	[Mar18 10:43] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.818410] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [57868f1b4ef4] <==
	{"level":"warn","ts":"2024-03-18T10:42:15.883775Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"493.08681ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13879"}
	{"level":"info","ts":"2024-03-18T10:42:15.883949Z","caller":"traceutil/trace.go:171","msg":"trace[1538691070] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1254; }","duration":"493.34611ms","start":"2024-03-18T10:42:15.390589Z","end":"2024-03-18T10:42:15.883936Z","steps":["trace[1538691070] 'range keys from in-memory index tree'  (duration: 492.83761ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T10:42:15.884498Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T10:42:15.390539Z","time spent":"493.945111ms","remote":"127.0.0.1:52390","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13902,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-03-18T10:42:15.885429Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"291.14006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10918"}
	{"level":"info","ts":"2024-03-18T10:42:15.885522Z","caller":"traceutil/trace.go:171","msg":"trace[332439666] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1254; }","duration":"291.23816ms","start":"2024-03-18T10:42:15.594275Z","end":"2024-03-18T10:42:15.885513Z","steps":["trace[332439666] 'range keys from in-memory index tree'  (duration: 289.200658ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T10:42:48.188391Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"736.579437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-03-18T10:42:48.188591Z","caller":"traceutil/trace.go:171","msg":"trace[769080845] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1423; }","duration":"736.771537ms","start":"2024-03-18T10:42:47.451785Z","end":"2024-03-18T10:42:48.188557Z","steps":["trace[769080845] 'range keys from in-memory index tree'  (duration: 736.462237ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T10:42:48.188632Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T10:42:47.451759Z","time spent":"736.861037ms","remote":"127.0.0.1:52368","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-03-18T10:42:48.188916Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"647.589344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-03-18T10:42:48.18897Z","caller":"traceutil/trace.go:171","msg":"trace[2085434912] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1423; }","duration":"647.637844ms","start":"2024-03-18T10:42:47.541301Z","end":"2024-03-18T10:42:48.188939Z","steps":["trace[2085434912] 'range keys from in-memory index tree'  (duration: 647.451644ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T10:42:48.188992Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T10:42:47.541279Z","time spent":"647.706744ms","remote":"127.0.0.1:52476","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":577,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	{"level":"warn","ts":"2024-03-18T10:42:48.189139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"566.034451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/gadget.kinvolk.io/traces/\" range_end:\"/registry/gadget.kinvolk.io/traces0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T10:42:48.189206Z","caller":"traceutil/trace.go:171","msg":"trace[1325640708] range","detail":"{range_begin:/registry/gadget.kinvolk.io/traces/; range_end:/registry/gadget.kinvolk.io/traces0; response_count:0; response_revision:1423; }","duration":"566.101351ms","start":"2024-03-18T10:42:47.623094Z","end":"2024-03-18T10:42:48.189196Z","steps":["trace[1325640708] 'count revisions from in-memory index tree'  (duration: 565.807851ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T10:42:48.189245Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T10:42:47.623073Z","time spent":"566.149251ms","remote":"127.0.0.1:47788","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/gadget.kinvolk.io/traces/\" range_end:\"/registry/gadget.kinvolk.io/traces0\" count_only:true "}
	{"level":"warn","ts":"2024-03-18T10:42:48.189373Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"486.025357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-78236935-41c5-4ffc-887f-2d2dd3c5a049\" ","response":"range_response_count:1 size:3995"}
	{"level":"info","ts":"2024-03-18T10:42:48.190101Z","caller":"traceutil/trace.go:171","msg":"trace[1437260971] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-delete-pvc-78236935-41c5-4ffc-887f-2d2dd3c5a049; range_end:; response_count:1; response_revision:1423; }","duration":"486.751056ms","start":"2024-03-18T10:42:47.70334Z","end":"2024-03-18T10:42:48.190091Z","steps":["trace[1437260971] 'range keys from in-memory index tree'  (duration: 485.716957ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T10:42:48.190282Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T10:42:47.703319Z","time spent":"486.952856ms","remote":"127.0.0.1:52390","response type":"/etcdserverpb.KV/Range","request count":0,"request size":94,"response count":1,"response size":4018,"request content":"key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-78236935-41c5-4ffc-887f-2d2dd3c5a049\" "}
	{"level":"warn","ts":"2024-03-18T10:42:48.19075Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.039275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6088"}
	{"level":"info","ts":"2024-03-18T10:42:48.190894Z","caller":"traceutil/trace.go:171","msg":"trace[301790856] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1423; }","duration":"250.183975ms","start":"2024-03-18T10:42:47.940701Z","end":"2024-03-18T10:42:48.190885Z","steps":["trace[301790856] 'range keys from in-memory index tree'  (duration: 249.800775ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T10:42:48.438673Z","caller":"traceutil/trace.go:171","msg":"trace[2052099112] transaction","detail":"{read_only:false; response_revision:1424; number_of_response:1; }","duration":"335.156566ms","start":"2024-03-18T10:42:48.103493Z","end":"2024-03-18T10:42:48.43865Z","steps":["trace[2052099112] 'process raft request'  (duration: 334.900166ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T10:42:48.438839Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T10:42:48.103471Z","time spent":"335.257266ms","remote":"127.0.0.1:52476","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ujpxwzrg5aaoyfykp3rstd7uxm\" mod_revision:1403 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ujpxwzrg5aaoyfykp3rstd7uxm\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ujpxwzrg5aaoyfykp3rstd7uxm\" > >"}
	{"level":"info","ts":"2024-03-18T10:42:48.444782Z","caller":"traceutil/trace.go:171","msg":"trace[828962610] transaction","detail":"{read_only:false; response_revision:1426; number_of_response:1; }","duration":"226.127077ms","start":"2024-03-18T10:42:48.218633Z","end":"2024-03-18T10:42:48.44476Z","steps":["trace[828962610] 'process raft request'  (duration: 226.057177ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T10:42:48.445363Z","caller":"traceutil/trace.go:171","msg":"trace[380428076] transaction","detail":"{read_only:false; response_revision:1425; number_of_response:1; }","duration":"226.715977ms","start":"2024-03-18T10:42:48.218633Z","end":"2024-03-18T10:42:48.445349Z","steps":["trace[380428076] 'process raft request'  (duration: 225.550177ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T10:42:48.69975Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.989884ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/registry-proxy-htczk.17bdd5ae328f2230\" ","response":"range_response_count:1 size:811"}
	{"level":"info","ts":"2024-03-18T10:42:48.699824Z","caller":"traceutil/trace.go:171","msg":"trace[588823040] range","detail":"{range_begin:/registry/events/kube-system/registry-proxy-htczk.17bdd5ae328f2230; range_end:; response_count:1; response_revision:1426; }","duration":"161.074784ms","start":"2024-03-18T10:42:48.538736Z","end":"2024-03-18T10:42:48.699811Z","steps":["trace[588823040] 'range keys from in-memory index tree'  (duration: 160.840784ms)"],"step_count":1}
	
	
	==> gcp-auth [b2e0e12bc248] <==
	2024/03/18 10:42:20 GCP Auth Webhook started!
	2024/03/18 10:42:22 Ready to marshal response ...
	2024/03/18 10:42:22 Ready to write response ...
	2024/03/18 10:42:22 Ready to marshal response ...
	2024/03/18 10:42:22 Ready to write response ...
	2024/03/18 10:42:32 Ready to marshal response ...
	2024/03/18 10:42:32 Ready to write response ...
	2024/03/18 10:42:33 Ready to marshal response ...
	2024/03/18 10:42:33 Ready to write response ...
	2024/03/18 10:42:46 Ready to marshal response ...
	2024/03/18 10:42:46 Ready to write response ...
	2024/03/18 10:43:07 Ready to marshal response ...
	2024/03/18 10:43:07 Ready to write response ...
	2024/03/18 10:43:17 Ready to marshal response ...
	2024/03/18 10:43:17 Ready to write response ...
	
	
	==> kernel <==
	 10:43:22 up 6 min,  0 users,  load average: 3.38, 2.90, 1.35
	Linux addons-748800 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ebaa22e44200] <==
	Trace[1740035336]: [562.565579ms] [562.565579ms] END
	E0318 10:40:52.656351       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0318 10:40:52.677084       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 10:41:21.053162       1 trace.go:236] Trace[2063606790]: "Get" accept:application/json, */*,audit-id:9e1088b2-030a-4996-b490-6ca46efcf81e,client:172.25.150.46,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (18-Mar-2024 10:41:20.540) (total time: 512ms):
	Trace[2063606790]: ---"About to write a response" 512ms (10:41:21.053)
	Trace[2063606790]: [512.665782ms] [512.665782ms] END
	I0318 10:41:21.062636       1 trace.go:236] Trace[843133693]: "List" accept:application/json, */*,audit-id:6337460e-c10a-4f80-8e01-72d0ee7feaab,client:172.25.144.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (18-Mar-2024 10:41:20.499) (total time: 562ms):
	Trace[843133693]: ["List(recursive=true) etcd3" audit-id:6337460e-c10a-4f80-8e01-72d0ee7feaab,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 562ms (10:41:20.499)]
	Trace[843133693]: [562.915628ms] [562.915628ms] END
	I0318 10:41:33.395975       1 trace.go:236] Trace[117657465]: "List" accept:application/json, */*,audit-id:f4009bb6-8807-4740-9725-3b24557f6aed,client:172.25.144.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (18-Mar-2024 10:41:32.885) (total time: 510ms):
	Trace[117657465]: ["List(recursive=true) etcd3" audit-id:f4009bb6-8807-4740-9725-3b24557f6aed,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 510ms (10:41:32.885)]
	Trace[117657465]: [510.80698ms] [510.80698ms] END
	I0318 10:41:51.657918       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 10:42:14.983321       1 trace.go:236] Trace[836660305]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.25.150.46,type:*v1.Endpoints,resource:apiServerIPInfo (18-Mar-2024 10:42:14.253) (total time: 729ms):
	Trace[836660305]: ---"Transaction prepared" 229ms (10:42:14.485)
	Trace[836660305]: ---"Txn call completed" 497ms (10:42:14.983)
	Trace[836660305]: [729.485318ms] [729.485318ms] END
	I0318 10:42:48.192368       1 trace.go:236] Trace[805634223]: "Get" accept:application/json, */*,audit-id:8a62331e-aea9-4aa5-8de2-38dbc41bc60b,client:10.244.0.17,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/external-health-monitor-leader-hostpath-csi-k8s-io,user-agent:csi-external-health-monitor-controller/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (18-Mar-2024 10:42:47.540) (total time: 651ms):
	Trace[805634223]: ---"About to write a response" 651ms (10:42:48.192)
	Trace[805634223]: [651.938743ms] [651.938743ms] END
	I0318 10:42:48.192867       1 trace.go:236] Trace[2053892309]: "Get" accept:application/json, */*,audit-id:c23be52b-571d-4989-ad0a-f596164b6733,client:172.25.150.46,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (18-Mar-2024 10:42:47.451) (total time: 741ms):
	Trace[2053892309]: ---"About to write a response" 741ms (10:42:48.192)
	Trace[2053892309]: [741.759236ms] [741.759236ms] END
	I0318 10:42:51.657323       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 10:42:58.184850       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [fbefab8a6824] <==
	I0318 10:41:48.374597       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0318 10:41:48.394747       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0318 10:41:48.400249       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0318 10:41:48.401962       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0318 10:42:17.055115       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0318 10:42:17.155644       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="76.401µs"
	I0318 10:42:17.195256       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0318 10:42:18.023937       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0318 10:42:18.112136       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0318 10:42:21.391647       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="18.51482ms"
	I0318 10:42:21.392864       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="31.8µs"
	I0318 10:42:22.314542       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0318 10:42:22.386236       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 10:42:22.722543       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 10:42:22.726342       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 10:42:22.726855       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 10:42:33.440663       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 10:42:35.635953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="39.636896ms"
	I0318 10:42:35.641614       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="63.9µs"
	I0318 10:42:58.389981       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="9.8µs"
	I0318 10:43:01.157770       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 10:43:01.163865       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 10:43:05.403894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="7.5µs"
	I0318 10:43:07.223049       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 10:43:11.674734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-69cf46c98" duration="5.5µs"
	
	
	==> kube-proxy [20429ac667ab] <==
	I0318 10:39:21.954565       1 server_others.go:69] "Using iptables proxy"
	I0318 10:39:22.230124       1 node.go:141] Successfully retrieved node IP: 172.25.150.46
	I0318 10:39:22.442843       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 10:39:22.442912       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 10:39:22.475902       1 server_others.go:152] "Using iptables Proxier"
	I0318 10:39:22.476011       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 10:39:22.487033       1 server.go:846] "Version info" version="v1.28.4"
	I0318 10:39:22.487137       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 10:39:22.491081       1 config.go:188] "Starting service config controller"
	I0318 10:39:22.491204       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 10:39:22.491559       1 config.go:97] "Starting endpoint slice config controller"
	I0318 10:39:22.491771       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 10:39:22.503699       1 config.go:315] "Starting node config controller"
	I0318 10:39:22.503751       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 10:39:22.591964       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 10:39:22.593172       1 shared_informer.go:318] Caches are synced for service config
	I0318 10:39:22.604039       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7655462e641c] <==
	W0318 10:38:52.848485       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 10:38:52.848507       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 10:38:52.852683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 10:38:52.852706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 10:38:52.884921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 10:38:52.885038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 10:38:52.948411       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 10:38:52.948963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 10:38:53.003767       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 10:38:53.003799       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 10:38:53.040851       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 10:38:53.040962       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 10:38:53.078601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 10:38:53.078650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 10:38:53.115384       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 10:38:53.115877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 10:38:53.143980       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 10:38:53.144070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 10:38:53.333223       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 10:38:53.333426       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 10:38:53.345662       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 10:38:53.345832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 10:38:53.376574       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 10:38:53.377364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 10:38:55.506027       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 10:43:14 addons-748800 kubelet[2848]: I0318 10:43:14.083675    2848 scope.go:117] "RemoveContainer" containerID="cf8560b9aaa228a0bb5e13bccb0ae3d5af16c73c1486ce0f69ccda4ead8c5d30"
	Mar 18 10:43:14 addons-748800 kubelet[2848]: I0318 10:43:14.554098    2848 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9ca72841-3a3f-4495-be4a-eaa6cfc05271" path="/var/lib/kubelet/pods/9ca72841-3a3f-4495-be4a-eaa6cfc05271/volumes"
	Mar 18 10:43:16 addons-748800 kubelet[2848]: I0318 10:43:16.615674    2848 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgscf\" (UniqueName: \"kubernetes.io/projected/d437966e-b607-4c11-a40d-4ddfd4c1200d-kube-api-access-xgscf\") pod \"d437966e-b607-4c11-a40d-4ddfd4c1200d\" (UID: \"d437966e-b607-4c11-a40d-4ddfd4c1200d\") "
	Mar 18 10:43:16 addons-748800 kubelet[2848]: I0318 10:43:16.616051    2848 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4e65c8dd-e514-11ee-8e8f-1a05e3d068da\") pod \"d437966e-b607-4c11-a40d-4ddfd4c1200d\" (UID: \"d437966e-b607-4c11-a40d-4ddfd4c1200d\") "
	Mar 18 10:43:16 addons-748800 kubelet[2848]: I0318 10:43:16.616153    2848 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d437966e-b607-4c11-a40d-4ddfd4c1200d-gcp-creds\") pod \"d437966e-b607-4c11-a40d-4ddfd4c1200d\" (UID: \"d437966e-b607-4c11-a40d-4ddfd4c1200d\") "
	Mar 18 10:43:16 addons-748800 kubelet[2848]: I0318 10:43:16.616259    2848 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d437966e-b607-4c11-a40d-4ddfd4c1200d-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "d437966e-b607-4c11-a40d-4ddfd4c1200d" (UID: "d437966e-b607-4c11-a40d-4ddfd4c1200d"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 18 10:43:16 addons-748800 kubelet[2848]: I0318 10:43:16.625658    2848 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^4e65c8dd-e514-11ee-8e8f-1a05e3d068da" (OuterVolumeSpecName: "task-pv-storage") pod "d437966e-b607-4c11-a40d-4ddfd4c1200d" (UID: "d437966e-b607-4c11-a40d-4ddfd4c1200d"). InnerVolumeSpecName "pvc-1658a737-8d19-4839-ab91-c5af065c840f". PluginName "kubernetes.io/csi", VolumeGidValue ""
	Mar 18 10:43:16 addons-748800 kubelet[2848]: I0318 10:43:16.625854    2848 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d437966e-b607-4c11-a40d-4ddfd4c1200d-kube-api-access-xgscf" (OuterVolumeSpecName: "kube-api-access-xgscf") pod "d437966e-b607-4c11-a40d-4ddfd4c1200d" (UID: "d437966e-b607-4c11-a40d-4ddfd4c1200d"). InnerVolumeSpecName "kube-api-access-xgscf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 18 10:43:16 addons-748800 kubelet[2848]: I0318 10:43:16.717417    2848 reconciler_common.go:293] "operationExecutor.UnmountDevice started for volume \"pvc-1658a737-8d19-4839-ab91-c5af065c840f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4e65c8dd-e514-11ee-8e8f-1a05e3d068da\") on node \"addons-748800\" "
	Mar 18 10:43:16 addons-748800 kubelet[2848]: I0318 10:43:16.717582    2848 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d437966e-b607-4c11-a40d-4ddfd4c1200d-gcp-creds\") on node \"addons-748800\" DevicePath \"\""
	Mar 18 10:43:16 addons-748800 kubelet[2848]: I0318 10:43:16.717603    2848 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xgscf\" (UniqueName: \"kubernetes.io/projected/d437966e-b607-4c11-a40d-4ddfd4c1200d-kube-api-access-xgscf\") on node \"addons-748800\" DevicePath \"\""
	Mar 18 10:43:16 addons-748800 kubelet[2848]: I0318 10:43:16.725753    2848 operation_generator.go:996] UnmountDevice succeeded for volume "pvc-1658a737-8d19-4839-ab91-c5af065c840f" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^4e65c8dd-e514-11ee-8e8f-1a05e3d068da") on node "addons-748800"
	Mar 18 10:43:16 addons-748800 kubelet[2848]: I0318 10:43:16.818490    2848 reconciler_common.go:300] "Volume detached for volume \"pvc-1658a737-8d19-4839-ab91-c5af065c840f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4e65c8dd-e514-11ee-8e8f-1a05e3d068da\") on node \"addons-748800\" DevicePath \"\""
	Mar 18 10:43:17 addons-748800 kubelet[2848]: I0318 10:43:17.220641    2848 scope.go:117] "RemoveContainer" containerID="c164f57220e8b94279f33733e614ed77c62e6fe3215cae8130c90464e43bf8b0"
	Mar 18 10:43:17 addons-748800 kubelet[2848]: I0318 10:43:17.279339    2848 scope.go:117] "RemoveContainer" containerID="c164f57220e8b94279f33733e614ed77c62e6fe3215cae8130c90464e43bf8b0"
	Mar 18 10:43:17 addons-748800 kubelet[2848]: E0318 10:43:17.280947    2848 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c164f57220e8b94279f33733e614ed77c62e6fe3215cae8130c90464e43bf8b0" containerID="c164f57220e8b94279f33733e614ed77c62e6fe3215cae8130c90464e43bf8b0"
	Mar 18 10:43:17 addons-748800 kubelet[2848]: I0318 10:43:17.280993    2848 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c164f57220e8b94279f33733e614ed77c62e6fe3215cae8130c90464e43bf8b0"} err="failed to get container status \"c164f57220e8b94279f33733e614ed77c62e6fe3215cae8130c90464e43bf8b0\": rpc error: code = Unknown desc = Error response from daemon: No such container: c164f57220e8b94279f33733e614ed77c62e6fe3215cae8130c90464e43bf8b0"
	Mar 18 10:43:17 addons-748800 kubelet[2848]: I0318 10:43:17.322304    2848 topology_manager.go:215] "Topology Admit Handler" podUID="7a0edd5a-058f-4315-9503-36f32cdd34d7" podNamespace="kube-system" podName="helm-test"
	Mar 18 10:43:17 addons-748800 kubelet[2848]: E0318 10:43:17.322624    2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d437966e-b607-4c11-a40d-4ddfd4c1200d" containerName="task-pv-container"
	Mar 18 10:43:17 addons-748800 kubelet[2848]: E0318 10:43:17.322650    2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ca72841-3a3f-4495-be4a-eaa6cfc05271" containerName="metrics-server"
	Mar 18 10:43:17 addons-748800 kubelet[2848]: I0318 10:43:17.322739    2848 memory_manager.go:346] "RemoveStaleState removing state" podUID="d437966e-b607-4c11-a40d-4ddfd4c1200d" containerName="task-pv-container"
	Mar 18 10:43:17 addons-748800 kubelet[2848]: I0318 10:43:17.322807    2848 memory_manager.go:346] "RemoveStaleState removing state" podUID="9ca72841-3a3f-4495-be4a-eaa6cfc05271" containerName="metrics-server"
	Mar 18 10:43:17 addons-748800 kubelet[2848]: I0318 10:43:17.431410    2848 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tftqx\" (UniqueName: \"kubernetes.io/projected/7a0edd5a-058f-4315-9503-36f32cdd34d7-kube-api-access-tftqx\") pod \"helm-test\" (UID: \"7a0edd5a-058f-4315-9503-36f32cdd34d7\") " pod="kube-system/helm-test"
	Mar 18 10:43:17 addons-748800 kubelet[2848]: I0318 10:43:17.626410    2848 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/helm-test" secret="" err="secret \"gcp-auth\" not found"
	Mar 18 10:43:18 addons-748800 kubelet[2848]: I0318 10:43:18.564730    2848 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d437966e-b607-4c11-a40d-4ddfd4c1200d" path="/var/lib/kubelet/pods/d437966e-b607-4c11-a40d-4ddfd4c1200d/volumes"
	
	
	==> storage-provisioner [63189bfcfb9f] <==
	I0318 10:39:44.084398       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 10:39:44.533377       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 10:39:44.533538       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 10:39:44.570744       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 10:39:44.570897       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-748800_ea2767cb-ae1a-4b50-9f70-b31f13575dd9!
	I0318 10:39:44.578102       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"efce4346-bde9-4f9f-9bb3-9d344a6d6013", APIVersion:"v1", ResourceVersion:"787", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-748800_ea2767cb-ae1a-4b50-9f70-b31f13575dd9 became leader
	I0318 10:39:44.872575       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-748800_ea2767cb-ae1a-4b50-9f70-b31f13575dd9!
	

-- /stdout --
** stderr ** 
	W0318 10:43:12.412694   10908 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
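Note: the Docker CLI context warning above recurs on every minikube invocation in this report. The hash-named directory under .docker\contexts\meta is where the docker CLI keeps per-context metadata, and it evidently was never created on this host. As a diagnostic sketch only, using standard docker CLI subcommands on the affected host, the contexts the CLI can actually resolve are visible with:

	docker context ls
	docker context inspect default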
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-748800 -n addons-748800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-748800 -n addons-748800: (13.3596874s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-748800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-mxjll ingress-nginx-admission-patch-w89zm
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-748800 describe pod ingress-nginx-admission-create-mxjll ingress-nginx-admission-patch-w89zm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-748800 describe pod ingress-nginx-admission-create-mxjll ingress-nginx-admission-patch-w89zm: exit status 1 (165.9218ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-mxjll" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-w89zm" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-748800 describe pod ingress-nginx-admission-create-mxjll ingress-nginx-admission-patch-w89zm: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.82s)
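Note: both admission pods belong to one-shot ingress-nginx Jobs and appear to have been cleaned up between the pod list and the describe above, so the NotFound exit is expected rather than a second failure. A probe that tolerates already-deleted pods (sketch only; --ignore-not-found is a standard kubectl get flag and yields exit status 0 when the named resources are absent) would be:

	kubectl --context addons-748800 get pod ingress-nginx-admission-create-mxjll ingress-nginx-admission-patch-w89zm --ignore-not-found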

TestCertExpiration (1355.31s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-387900 --memory=2048 --cert-expiration=3m --driver=hyperv
E0318 13:11:12.885776    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-387900 --memory=2048 --cert-expiration=3m --driver=hyperv: (8m46.8809991s)
E0318 13:19:49.641469    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-387900 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-387900 --memory=2048 --cert-expiration=8760h --driver=hyperv: exit status 90 (6m25.0896945s)
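Note: the stderr capture below ends in RUNTIME_ENABLE (the docker restart inside the VM failed), and minikube inlines the docker unit journal it suggests checking. To repeat that inspection against the live VM, the usual probes would be the following (profile name taken from this run; minikube ssh with a trailing command plus the systemd tools named in the error, shown as a sketch only):

	out/minikube-windows-amd64.exe ssh -p cert-expiration-387900 -- sudo systemctl status docker.service --no-pager
	out/minikube-windows-amd64.exe ssh -p cert-expiration-387900 -- sudo journalctl -xeu docker.service --no-pager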

-- stdout --
	* [cert-expiration-387900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18431
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "cert-expiration-387900" primary control-plane node in "cert-expiration-387900" cluster
	* Updating the running hyperv "cert-expiration-387900" VM ...
	
	

-- /stdout --
** stderr ** 
	W0318 13:22:22.961442    4280 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This VM is having trouble accessing https://registry.k8s.io
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Mar 18 13:17:57 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:17:57 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:57.620341409Z" level=info msg="Starting up"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:57.621542505Z" level=info msg="containerd not running, starting managed containerd"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:57.623053100Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.657832886Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690149779Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690194679Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690293679Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690314079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690420078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690444878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691444575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691495575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691578975Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691599575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691724274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.692031373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.695395562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.695504962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.695825761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.695922960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.696035560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.696264559Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.696356959Z" level=info msg="metadata content store policy set" policy=shared
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720137881Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720288880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720315580Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720408180Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720424380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720619579Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721159277Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721308077Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721353277Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721371177Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721386777Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721413177Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721426577Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721441276Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721477676Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721491876Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721505776Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721518476Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721545476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721561876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721641876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721665376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721680176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721694676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721709276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721724076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721738876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721755275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721768575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721781875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721795175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721818675Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721842275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721872675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721884675Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721933775Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721973375Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721989475Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722002875Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722137074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722175774Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722190674Z" level=info msg="NRI interface is disabled by configuration."
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722600673Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722670872Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722744372Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722766272Z" level=info msg="containerd successfully booted in 0.066468s"
	Mar 18 13:17:58 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:58.692577720Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 18 13:17:58 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:58.733722894Z" level=info msg="Loading containers: start."
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.052398798Z" level=info msg="Loading containers: done."
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.081006743Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.081318551Z" level=info msg="Daemon has completed initialization"
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.210511613Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.210562115Z" level=info msg="API listen on [::]:2376"
	Mar 18 13:17:59 cert-expiration-387900 systemd[1]: Started Docker Application Container Engine.
	Mar 18 13:18:32 cert-expiration-387900 systemd[1]: Stopping Docker Application Container Engine...
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.181291454Z" level=info msg="Processing signal 'terminated'"
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.183919460Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.184895463Z" level=info msg="Daemon shutdown complete"
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.185091563Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.185188264Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 18 13:18:33 cert-expiration-387900 systemd[1]: docker.service: Deactivated successfully.
	Mar 18 13:18:33 cert-expiration-387900 systemd[1]: Stopped Docker Application Container Engine.
	Mar 18 13:18:33 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:33.276481389Z" level=info msg="Starting up"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:33.277615492Z" level=info msg="containerd not running, starting managed containerd"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:33.279088195Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1027
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.312376178Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.344841260Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345000860Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345090960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345168160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345217960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345234861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345473461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345595261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345634462Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345645162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345671662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345833962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.349587271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.349712772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.349891672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.349989072Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350071173Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350172773Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350188773Z" level=info msg="metadata content store policy set" policy=shared
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350532074Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350654974Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350696274Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350714374Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350736774Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350790774Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351236076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351387476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351410876Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351427276Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351446376Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351482776Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351498476Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351610176Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351867177Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351892377Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351914477Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351929677Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351952677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351970077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351984977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352001377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352056478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352094578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352109978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352125278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352193278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352222278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352237478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352251978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352266878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352290178Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352330678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352345578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352358878Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352408878Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352451479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352466579Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352479079Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352560879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352601679Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352619679Z" level=info msg="NRI interface is disabled by configuration."
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352949180Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.353334681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.353417081Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.353477481Z" level=info msg="containerd successfully booted in 0.042179s"
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.322595801Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.347570064Z" level=info msg="Loading containers: start."
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.551505373Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.641652798Z" level=info msg="Loading containers: done."
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.666864961Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.667108862Z" level=info msg="Daemon has completed initialization"
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.713775878Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 18 13:18:34 cert-expiration-387900 systemd[1]: Started Docker Application Container Engine.
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.716481285Z" level=info msg="API listen on [::]:2376"
	Mar 18 13:18:49 cert-expiration-387900 systemd[1]: Stopping Docker Application Container Engine...
	Mar 18 13:18:49 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:49.752791635Z" level=info msg="Processing signal 'terminated'"
	Mar 18 13:18:50 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:50.106980920Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 18 13:18:50 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:50.107337821Z" level=info msg="Daemon shutdown complete"
	Mar 18 13:18:50 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:50.107570521Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 18 13:18:50 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:50.107792222Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 18 13:18:51 cert-expiration-387900 systemd[1]: docker.service: Deactivated successfully.
	Mar 18 13:18:51 cert-expiration-387900 systemd[1]: Stopped Docker Application Container Engine.
	Mar 18 13:18:51 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:51.195324938Z" level=info msg="Starting up"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:51.196285540Z" level=info msg="containerd not running, starting managed containerd"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:51.197688743Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1343
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.230756526Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264354710Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264489510Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264547110Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264566011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264600411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264618211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264832311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264943011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264967612Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264980712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.265060412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.265472713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270127524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270252525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270758126Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270859926Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270903626Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270925426Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270939326Z" level=info msg="metadata content store policy set" policy=shared
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271178427Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271343627Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271370027Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271390828Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271408628Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271465328Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271807529Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271966329Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272113529Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272144429Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272161229Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272231630Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272260230Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272277330Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272294130Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272317730Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272335030Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272348630Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272372330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272388530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272405830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272421430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272435730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272450430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272464330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272479530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272494930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272512730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272527230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272542030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272556430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272574231Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272597731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272616631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272632431Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272675231Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272693631Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272706731Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272720331Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272803831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272827531Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272841031Z" level=info msg="NRI interface is disabled by configuration."
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.273144732Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.273294232Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.273441333Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.273545533Z" level=info msg="containerd successfully booted in 0.044143s"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.245630384Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.478605590Z" level=info msg="Loading containers: start."
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.713446595Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.800966497Z" level=info msg="Loading containers: done."
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.831475898Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.831691298Z" level=info msg="Daemon has completed initialization"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.874478599Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.874579399Z" level=info msg="API listen on [::]:2376"
	Mar 18 13:18:52 cert-expiration-387900 systemd[1]: Started Docker Application Container Engine.
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.589006262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.589151162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.589164362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.589375062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.647630027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.647745427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.647765827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.647942028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.768505762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.769533963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.769960563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.770878864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.780983976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.781465676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.781487576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.781599776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.140848788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.141565289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.141721590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.142141690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.191467349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.191729350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.191814250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.192096150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.342914431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.343085831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.343107131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.343615832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.416665819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.416953120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.417357520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.418664122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:23 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:23.904111013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:23 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:23.906724319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:23 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:23.906845519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:23 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:23.907431320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.004985431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.005453432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.005591033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.006078534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.299992676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.301494179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.302176280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.303120082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.488396889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.488470289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.488488889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.488735390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.765917699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.765987599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.766001799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.766167600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.961269829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.961429429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.961467630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.961692730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:27:36 cert-expiration-387900 systemd[1]: Stopping Docker Application Container Engine...
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.330645953Z" level=info msg="Processing signal 'terminated'"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.586138504Z" level=info msg="ignoring event" container=96fd0abbc72f21ee1c7509c9acebc203af33ad25e6adecb6e2152237e009d579 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.588066411Z" level=info msg="shim disconnected" id=96fd0abbc72f21ee1c7509c9acebc203af33ad25e6adecb6e2152237e009d579 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.589278815Z" level=warning msg="cleaning up after shim disconnected" id=96fd0abbc72f21ee1c7509c9acebc203af33ad25e6adecb6e2152237e009d579 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.589367916Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.635038386Z" level=info msg="ignoring event" container=c1007e91b363d1dacba626aad982b9a5de8bada51e5670fa92c7d13946c79d49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.636093590Z" level=info msg="shim disconnected" id=c1007e91b363d1dacba626aad982b9a5de8bada51e5670fa92c7d13946c79d49 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.637171494Z" level=warning msg="cleaning up after shim disconnected" id=c1007e91b363d1dacba626aad982b9a5de8bada51e5670fa92c7d13946c79d49 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.637326694Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.639408502Z" level=info msg="ignoring event" container=4cced86914b298bd779418f3daf0d6e310a61bd484ebb1279735b35cdba6da74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.640554006Z" level=info msg="shim disconnected" id=4cced86914b298bd779418f3daf0d6e310a61bd484ebb1279735b35cdba6da74 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.640703307Z" level=warning msg="cleaning up after shim disconnected" id=4cced86914b298bd779418f3daf0d6e310a61bd484ebb1279735b35cdba6da74 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.640840607Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.662626788Z" level=info msg="ignoring event" container=b6c5311bd5083719cc6fadcb45665e0b05f1bbd649ed570184de785450c69a83 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.664633096Z" level=info msg="shim disconnected" id=b6c5311bd5083719cc6fadcb45665e0b05f1bbd649ed570184de785450c69a83 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.665045997Z" level=warning msg="cleaning up after shim disconnected" id=b6c5311bd5083719cc6fadcb45665e0b05f1bbd649ed570184de785450c69a83 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.665164998Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.681329058Z" level=info msg="shim disconnected" id=4cce9ff9cd0b8ae48ebe643ac4ca60469f97cee70bccacc419be12f89fd1473e namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.681405058Z" level=warning msg="cleaning up after shim disconnected" id=4cce9ff9cd0b8ae48ebe643ac4ca60469f97cee70bccacc419be12f89fd1473e namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.681419358Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.681798860Z" level=info msg="ignoring event" container=4cce9ff9cd0b8ae48ebe643ac4ca60469f97cee70bccacc419be12f89fd1473e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.712509774Z" level=info msg="shim disconnected" id=f7ad92eae7109ce0c7b18dd4d8fe5f5ca578a22ad5b4e527947b9c3a1575e380 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.713396177Z" level=warning msg="cleaning up after shim disconnected" id=f7ad92eae7109ce0c7b18dd4d8fe5f5ca578a22ad5b4e527947b9c3a1575e380 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.713579378Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.720299403Z" level=info msg="ignoring event" container=a2a9b74dae2a1b6089bb6b5d372c56e9962447766eb9bc2ac3b5ce33da4d58ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.720502803Z" level=info msg="ignoring event" container=f7ad92eae7109ce0c7b18dd4d8fe5f5ca578a22ad5b4e527947b9c3a1575e380 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.720616804Z" level=info msg="ignoring event" container=317ae2407d998077804175f5f80858ac9fc790baa6bbc8cf13cdcbfa716735e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.726836427Z" level=info msg="shim disconnected" id=317ae2407d998077804175f5f80858ac9fc790baa6bbc8cf13cdcbfa716735e9 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.727084528Z" level=warning msg="cleaning up after shim disconnected" id=317ae2407d998077804175f5f80858ac9fc790baa6bbc8cf13cdcbfa716735e9 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.727283929Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.748058206Z" level=info msg="ignoring event" container=eab2cbbcda48a79953a341e7aa0076e7ca044d2c21b0f0258fb95d61a7503f39 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.748532408Z" level=info msg="shim disconnected" id=eab2cbbcda48a79953a341e7aa0076e7ca044d2c21b0f0258fb95d61a7503f39 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.752572823Z" level=info msg="ignoring event" container=3796fb6d711c31c1713a8e64c2d103ba70403de92dc92ac969dad35e8f99a0f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.754433530Z" level=warning msg="cleaning up after shim disconnected" id=eab2cbbcda48a79953a341e7aa0076e7ca044d2c21b0f0258fb95d61a7503f39 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.757317740Z" level=info msg="shim disconnected" id=f47267364955e899eaab5a77e8c509601f8ce110d2a5c9a89d381014e55530cb namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.766544075Z" level=info msg="ignoring event" container=f47267364955e899eaab5a77e8c509601f8ce110d2a5c9a89d381014e55530cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.770650690Z" level=warning msg="cleaning up after shim disconnected" id=f47267364955e899eaab5a77e8c509601f8ce110d2a5c9a89d381014e55530cb namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.770997291Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.766175973Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.749672112Z" level=info msg="shim disconnected" id=a2a9b74dae2a1b6089bb6b5d372c56e9962447766eb9bc2ac3b5ce33da4d58ae namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.790886365Z" level=warning msg="cleaning up after shim disconnected" id=a2a9b74dae2a1b6089bb6b5d372c56e9962447766eb9bc2ac3b5ce33da4d58ae namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.790911665Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.754349629Z" level=info msg="shim disconnected" id=3796fb6d711c31c1713a8e64c2d103ba70403de92dc92ac969dad35e8f99a0f3 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.798506394Z" level=warning msg="cleaning up after shim disconnected" id=3796fb6d711c31c1713a8e64c2d103ba70403de92dc92ac969dad35e8f99a0f3 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.798581694Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.860606425Z" level=info msg="shim disconnected" id=b4bcc982a59e70008c9f4f9f8323921cfa098f71240b74a76cacf79fe5f26311 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.862693732Z" level=info msg="ignoring event" container=b4bcc982a59e70008c9f4f9f8323921cfa098f71240b74a76cacf79fe5f26311 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.862864633Z" level=warning msg="cleaning up after shim disconnected" id=b4bcc982a59e70008c9f4f9f8323921cfa098f71240b74a76cacf79fe5f26311 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.863100134Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:41 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:41.533158805Z" level=info msg="ignoring event" container=6a7c758b23ead7637582c14d5058f8d4a068f024688bd1fbb8265e4b2711b55b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:41 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:41.534832011Z" level=info msg="shim disconnected" id=6a7c758b23ead7637582c14d5058f8d4a068f024688bd1fbb8265e4b2711b55b namespace=moby
	Mar 18 13:27:41 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:41.534930812Z" level=warning msg="cleaning up after shim disconnected" id=6a7c758b23ead7637582c14d5058f8d4a068f024688bd1fbb8265e4b2711b55b namespace=moby
	Mar 18 13:27:41 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:41.534946712Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.447248733Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=2a4e70b9f61c04bd03ed464ddc66786513fe7a5fabcaf411974aebc70a30fa21
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.506990536Z" level=info msg="ignoring event" container=2a4e70b9f61c04bd03ed464ddc66786513fe7a5fabcaf411974aebc70a30fa21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:46.507485644Z" level=info msg="shim disconnected" id=2a4e70b9f61c04bd03ed464ddc66786513fe7a5fabcaf411974aebc70a30fa21 namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:46.507589846Z" level=warning msg="cleaning up after shim disconnected" id=2a4e70b9f61c04bd03ed464ddc66786513fe7a5fabcaf411974aebc70a30fa21 namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:46.507610146Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:46.530843097Z" level=warning msg="cleanup warnings time=\"2024-03-18T13:27:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.594849465Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.595644777Z" level=info msg="Daemon shutdown complete"
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.595966682Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.596396089Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 18 13:27:47 cert-expiration-387900 systemd[1]: docker.service: Deactivated successfully.
	Mar 18 13:27:47 cert-expiration-387900 systemd[1]: Stopped Docker Application Container Engine.
	Mar 18 13:27:47 cert-expiration-387900 systemd[1]: docker.service: Consumed 19.309s CPU time.
	Mar 18 13:27:47 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:27:47 cert-expiration-387900 dockerd[7408]: time="2024-03-18T13:27:47.716462627Z" level=info msg="Starting up"
	Mar 18 13:28:47 cert-expiration-387900 dockerd[7408]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
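The capture above pins down the failure chain: minikube's `sudo systemctl restart docker` stops the running engine at 13:27:36, systemd starts a fresh dockerd (pid 7408) at 13:27:47, and exactly 60 seconds later that daemon aborts with `failed to dial "/run/containerd/containerd.sock": context deadline exceeded`, so systemd marks docker.service failed and minikube exits with RUNTIME_ENABLE. Unlike the two earlier boots in the same log (dockerd[1021]/[1027] and dockerd[1337]/[1343]), which print "containerd not running, starting managed containerd" and serve it on /var/run/docker/containerd/containerd.sock, the failing start dials the system containerd socket and never launches a managed containerd; whether that reflects a changed daemon configuration or a containerd unit that simply never came up cannot be determined from this log alone.

The following is a minimal diagnostic sketch, not part of minikube or the test suite: it probes the same socket path the failing dockerd dials, under a hard deadline. The socket path is taken from the log line above; the 60-second budget mirrors the observed 13:27:47 to 13:28:47 gap and is an assumption, not dockerd's exact timeout configuration.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Path copied from the failing dial in the journalctl output above.
	const sock = "/run/containerd/containerd.sock"

	// Deadline chosen to match the observed 60s gap (an assumption).
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	var d net.Dialer
	conn, err := d.DialContext(ctx, "unix", sock)
	if err != nil {
		// With nothing listening on the socket, this reports the same
		// class of error as the log: a dial failure or, if the kernel
		// blocks rather than refuses, "context deadline exceeded".
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("containerd socket is accepting connections")
}

Run inside the guest (for example after `minikube ssh`, which should still work here since only docker.service failed, not the VM), a quick result either way narrows the fault: an immediate "connection refused" points at containerd not running, while a full-deadline hang points at a socket that exists but is never serviced.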
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-windows-amd64.exe start -p cert-expiration-387900 --memory=2048 --cert-expiration=8760h --driver=hyperv" : exit status 90
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-387900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18431
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "cert-expiration-387900" primary control-plane node in "cert-expiration-387900" cluster
	* Updating the running hyperv "cert-expiration-387900" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 13:22:22.961442    4280 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This VM is having trouble accessing https://registry.k8s.io
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Mar 18 13:17:57 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:17:57 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:57.620341409Z" level=info msg="Starting up"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:57.621542505Z" level=info msg="containerd not running, starting managed containerd"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:57.623053100Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.657832886Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690149779Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690194679Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690293679Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690314079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690420078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690444878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691444575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691495575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691578975Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691599575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691724274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.692031373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.695395562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.695504962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.695825761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.695922960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.696035560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.696264559Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.696356959Z" level=info msg="metadata content store policy set" policy=shared
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720137881Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720288880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720315580Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720408180Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720424380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720619579Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721159277Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721308077Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721353277Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721371177Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721386777Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721413177Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721426577Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721441276Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721477676Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721491876Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721505776Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721518476Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721545476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721561876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721641876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721665376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721680176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721694676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721709276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721724076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721738876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721755275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721768575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721781875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721795175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721818675Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721842275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721872675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721884675Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721933775Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721973375Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721989475Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722002875Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722137074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722175774Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722190674Z" level=info msg="NRI interface is disabled by configuration."
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722600673Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722670872Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722744372Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722766272Z" level=info msg="containerd successfully booted in 0.066468s"
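The three snapshotter skips above are expected on minikube's ext4-backed guest: the btrfs and zfs back ends require a matching filesystem at the daemon path, and aufs needs a kernel module that the 5.10.207 guest kernel does not ship. A minimal sketch of checking those preconditions from inside the node (assuming a stat that accepts GNU-style flags, which the minikube guest provides):

	# filesystem type backing the containerd daemon path (btrfs/zfs snapshotters need a match)
	stat -f -c %T /var/lib/docker/containerd/daemon
	# dry-run the module load that the aufs snapshotter attempts; fails on this kernel
	sudo modprobe -n aufs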
	Mar 18 13:17:58 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:58.692577720Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 18 13:17:58 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:58.733722894Z" level=info msg="Loading containers: start."
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.052398798Z" level=info msg="Loading containers: done."
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.081006743Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.081318551Z" level=info msg="Daemon has completed initialization"
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.210511613Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.210562115Z" level=info msg="API listen on [::]:2376"
	Mar 18 13:17:59 cert-expiration-387900 systemd[1]: Started Docker Application Container Engine.
	Mar 18 13:18:32 cert-expiration-387900 systemd[1]: Stopping Docker Application Container Engine...
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.181291454Z" level=info msg="Processing signal 'terminated'"
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.183919460Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.184895463Z" level=info msg="Daemon shutdown complete"
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.185091563Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.185188264Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 18 13:18:33 cert-expiration-387900 systemd[1]: docker.service: Deactivated successfully.
	Mar 18 13:18:33 cert-expiration-387900 systemd[1]: Stopped Docker Application Container Engine.
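The back-to-back stop/start cycles in this journal are minikube's provisioner restarting dockerd after rewriting its configuration, not a crash loop; each cycle ends cleanly with "docker.service: Deactivated successfully". To pull the same journal view straight from the node, something like the following should work (profile name taken from this run; journalctl is available because the guest runs systemd):

	out/minikube-windows-amd64.exe -p cert-expiration-387900 ssh "sudo journalctl -u docker --no-pager"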
	Mar 18 13:18:33 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:33.276481389Z" level=info msg="Starting up"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:33.277615492Z" level=info msg="containerd not running, starting managed containerd"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:33.279088195Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1027
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.312376178Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.344841260Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345000860Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345090960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345168160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345217960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345234861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345473461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345595261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345634462Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345645162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345671662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345833962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.349587271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.349712772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.349891672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.349989072Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350071173Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350172773Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350188773Z" level=info msg="metadata content store policy set" policy=shared
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350532074Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350654974Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350696274Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350714374Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350736774Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350790774Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351236076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351387476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351410876Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351427276Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351446376Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351482776Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351498476Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351610176Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351867177Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351892377Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351914477Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351929677Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351952677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351970077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351984977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352001377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352056478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352094578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352109978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352125278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352193278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352222278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352237478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352251978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352266878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352290178Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352330678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352345578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352358878Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352408878Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352451479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352466579Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352479079Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352560879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352601679Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352619679Z" level=info msg="NRI interface is disabled by configuration."
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352949180Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.353334681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.353417081Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.353477481Z" level=info msg="containerd successfully booted in 0.042179s"
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.322595801Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.347570064Z" level=info msg="Loading containers: start."
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.551505373Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.641652798Z" level=info msg="Loading containers: done."
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.666864961Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.667108862Z" level=info msg="Daemon has completed initialization"
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.713775878Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 18 13:18:34 cert-expiration-387900 systemd[1]: Started Docker Application Container Engine.
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.716481285Z" level=info msg="API listen on [::]:2376"
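The "Default bridge (docker0)" line above spells out the remedy when 172.17.0.0/16 collides with the local network: set bip in the daemon configuration and restart. A minimal sketch, with the replacement subnet purely as an example value:

	# /etc/docker/daemon.json (example subnet, adjust to taste)
	{ "bip": "172.18.0.1/16" }
	# then: sudo systemctl restart docker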
	Mar 18 13:18:49 cert-expiration-387900 systemd[1]: Stopping Docker Application Container Engine...
	Mar 18 13:18:49 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:49.752791635Z" level=info msg="Processing signal 'terminated'"
	Mar 18 13:18:50 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:50.106980920Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 18 13:18:50 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:50.107337821Z" level=info msg="Daemon shutdown complete"
	Mar 18 13:18:50 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:50.107570521Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 18 13:18:50 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:50.107792222Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 18 13:18:51 cert-expiration-387900 systemd[1]: docker.service: Deactivated successfully.
	Mar 18 13:18:51 cert-expiration-387900 systemd[1]: Stopped Docker Application Container Engine.
	Mar 18 13:18:51 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:51.195324938Z" level=info msg="Starting up"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:51.196285540Z" level=info msg="containerd not running, starting managed containerd"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:51.197688743Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1343
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.230756526Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264354710Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264489510Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264547110Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264566011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264600411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264618211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264832311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264943011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264967612Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264980712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.265060412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.265472713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270127524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270252525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270758126Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270859926Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270903626Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270925426Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270939326Z" level=info msg="metadata content store policy set" policy=shared
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271178427Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271343627Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271370027Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271390828Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271408628Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271465328Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271807529Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271966329Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272113529Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272144429Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272161229Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272231630Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272260230Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272277330Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272294130Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272317730Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272335030Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272348630Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272372330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272388530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272405830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272421430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272435730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272450430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272464330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272479530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272494930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272512730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272527230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272542030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272556430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272574231Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272597731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272616631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272632431Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272675231Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272693631Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272706731Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272720331Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272803831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272827531Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272841031Z" level=info msg="NRI interface is disabled by configuration."
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.273144732Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.273294232Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.273441333Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.273545533Z" level=info msg="containerd successfully booted in 0.044143s"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.245630384Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.478605590Z" level=info msg="Loading containers: start."
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.713446595Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.800966497Z" level=info msg="Loading containers: done."
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.831475898Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.831691298Z" level=info msg="Daemon has completed initialization"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.874478599Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.874579399Z" level=info msg="API listen on [::]:2376"
	Mar 18 13:18:52 cert-expiration-387900 systemd[1]: Started Docker Application Container Engine.
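Each runtime=io.containerd.runc.v2 burst that follows marks one container shim starting as the node's pods come up. With the daemon running, those shims can be listed against the managed containerd socket shown in the log; a sketch assuming the ctr client is present in the guest image:

	sudo ctr --address /var/run/docker/containerd/containerd.sock --namespace moby tasks ls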
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.589006262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.589151162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.589164362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.589375062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.647630027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.647745427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.647765827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.647942028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.768505762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.769533963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.769960563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.770878864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.780983976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.781465676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.781487576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.781599776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.140848788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.141565289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.141721590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.142141690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.191467349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.191729350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.191814250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.192096150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.342914431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.343085831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.343107131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.343615832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.416665819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.416953120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.417357520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.418664122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:23 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:23.904111013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:23 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:23.906724319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:23 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:23.906845519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:23 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:23.907431320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.004985431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.005453432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.005591033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.006078534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.299992676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.301494179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.302176280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.303120082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.488396889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.488470289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.488488889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.488735390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.765917699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.765987599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.766001799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.766167600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.961269829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.961429429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.961467630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.961692730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:27:36 cert-expiration-387900 systemd[1]: Stopping Docker Application Container Engine...
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.330645953Z" level=info msg="Processing signal 'terminated'"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.586138504Z" level=info msg="ignoring event" container=96fd0abbc72f21ee1c7509c9acebc203af33ad25e6adecb6e2152237e009d579 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.588066411Z" level=info msg="shim disconnected" id=96fd0abbc72f21ee1c7509c9acebc203af33ad25e6adecb6e2152237e009d579 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.589278815Z" level=warning msg="cleaning up after shim disconnected" id=96fd0abbc72f21ee1c7509c9acebc203af33ad25e6adecb6e2152237e009d579 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.589367916Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.635038386Z" level=info msg="ignoring event" container=c1007e91b363d1dacba626aad982b9a5de8bada51e5670fa92c7d13946c79d49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.636093590Z" level=info msg="shim disconnected" id=c1007e91b363d1dacba626aad982b9a5de8bada51e5670fa92c7d13946c79d49 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.637171494Z" level=warning msg="cleaning up after shim disconnected" id=c1007e91b363d1dacba626aad982b9a5de8bada51e5670fa92c7d13946c79d49 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.637326694Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.639408502Z" level=info msg="ignoring event" container=4cced86914b298bd779418f3daf0d6e310a61bd484ebb1279735b35cdba6da74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.640554006Z" level=info msg="shim disconnected" id=4cced86914b298bd779418f3daf0d6e310a61bd484ebb1279735b35cdba6da74 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.640703307Z" level=warning msg="cleaning up after shim disconnected" id=4cced86914b298bd779418f3daf0d6e310a61bd484ebb1279735b35cdba6da74 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.640840607Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.662626788Z" level=info msg="ignoring event" container=b6c5311bd5083719cc6fadcb45665e0b05f1bbd649ed570184de785450c69a83 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.664633096Z" level=info msg="shim disconnected" id=b6c5311bd5083719cc6fadcb45665e0b05f1bbd649ed570184de785450c69a83 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.665045997Z" level=warning msg="cleaning up after shim disconnected" id=b6c5311bd5083719cc6fadcb45665e0b05f1bbd649ed570184de785450c69a83 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.665164998Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.681329058Z" level=info msg="shim disconnected" id=4cce9ff9cd0b8ae48ebe643ac4ca60469f97cee70bccacc419be12f89fd1473e namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.681405058Z" level=warning msg="cleaning up after shim disconnected" id=4cce9ff9cd0b8ae48ebe643ac4ca60469f97cee70bccacc419be12f89fd1473e namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.681419358Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.681798860Z" level=info msg="ignoring event" container=4cce9ff9cd0b8ae48ebe643ac4ca60469f97cee70bccacc419be12f89fd1473e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.712509774Z" level=info msg="shim disconnected" id=f7ad92eae7109ce0c7b18dd4d8fe5f5ca578a22ad5b4e527947b9c3a1575e380 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.713396177Z" level=warning msg="cleaning up after shim disconnected" id=f7ad92eae7109ce0c7b18dd4d8fe5f5ca578a22ad5b4e527947b9c3a1575e380 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.713579378Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.720299403Z" level=info msg="ignoring event" container=a2a9b74dae2a1b6089bb6b5d372c56e9962447766eb9bc2ac3b5ce33da4d58ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.720502803Z" level=info msg="ignoring event" container=f7ad92eae7109ce0c7b18dd4d8fe5f5ca578a22ad5b4e527947b9c3a1575e380 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.720616804Z" level=info msg="ignoring event" container=317ae2407d998077804175f5f80858ac9fc790baa6bbc8cf13cdcbfa716735e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.726836427Z" level=info msg="shim disconnected" id=317ae2407d998077804175f5f80858ac9fc790baa6bbc8cf13cdcbfa716735e9 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.727084528Z" level=warning msg="cleaning up after shim disconnected" id=317ae2407d998077804175f5f80858ac9fc790baa6bbc8cf13cdcbfa716735e9 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.727283929Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.748058206Z" level=info msg="ignoring event" container=eab2cbbcda48a79953a341e7aa0076e7ca044d2c21b0f0258fb95d61a7503f39 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.748532408Z" level=info msg="shim disconnected" id=eab2cbbcda48a79953a341e7aa0076e7ca044d2c21b0f0258fb95d61a7503f39 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.752572823Z" level=info msg="ignoring event" container=3796fb6d711c31c1713a8e64c2d103ba70403de92dc92ac969dad35e8f99a0f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.754433530Z" level=warning msg="cleaning up after shim disconnected" id=eab2cbbcda48a79953a341e7aa0076e7ca044d2c21b0f0258fb95d61a7503f39 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.757317740Z" level=info msg="shim disconnected" id=f47267364955e899eaab5a77e8c509601f8ce110d2a5c9a89d381014e55530cb namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.766544075Z" level=info msg="ignoring event" container=f47267364955e899eaab5a77e8c509601f8ce110d2a5c9a89d381014e55530cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.770650690Z" level=warning msg="cleaning up after shim disconnected" id=f47267364955e899eaab5a77e8c509601f8ce110d2a5c9a89d381014e55530cb namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.770997291Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.766175973Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.749672112Z" level=info msg="shim disconnected" id=a2a9b74dae2a1b6089bb6b5d372c56e9962447766eb9bc2ac3b5ce33da4d58ae namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.790886365Z" level=warning msg="cleaning up after shim disconnected" id=a2a9b74dae2a1b6089bb6b5d372c56e9962447766eb9bc2ac3b5ce33da4d58ae namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.790911665Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.754349629Z" level=info msg="shim disconnected" id=3796fb6d711c31c1713a8e64c2d103ba70403de92dc92ac969dad35e8f99a0f3 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.798506394Z" level=warning msg="cleaning up after shim disconnected" id=3796fb6d711c31c1713a8e64c2d103ba70403de92dc92ac969dad35e8f99a0f3 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.798581694Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.860606425Z" level=info msg="shim disconnected" id=b4bcc982a59e70008c9f4f9f8323921cfa098f71240b74a76cacf79fe5f26311 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.862693732Z" level=info msg="ignoring event" container=b4bcc982a59e70008c9f4f9f8323921cfa098f71240b74a76cacf79fe5f26311 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.862864633Z" level=warning msg="cleaning up after shim disconnected" id=b4bcc982a59e70008c9f4f9f8323921cfa098f71240b74a76cacf79fe5f26311 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.863100134Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:41 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:41.533158805Z" level=info msg="ignoring event" container=6a7c758b23ead7637582c14d5058f8d4a068f024688bd1fbb8265e4b2711b55b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:41 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:41.534832011Z" level=info msg="shim disconnected" id=6a7c758b23ead7637582c14d5058f8d4a068f024688bd1fbb8265e4b2711b55b namespace=moby
	Mar 18 13:27:41 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:41.534930812Z" level=warning msg="cleaning up after shim disconnected" id=6a7c758b23ead7637582c14d5058f8d4a068f024688bd1fbb8265e4b2711b55b namespace=moby
	Mar 18 13:27:41 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:41.534946712Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.447248733Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=2a4e70b9f61c04bd03ed464ddc66786513fe7a5fabcaf411974aebc70a30fa21
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.506990536Z" level=info msg="ignoring event" container=2a4e70b9f61c04bd03ed464ddc66786513fe7a5fabcaf411974aebc70a30fa21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:46.507485644Z" level=info msg="shim disconnected" id=2a4e70b9f61c04bd03ed464ddc66786513fe7a5fabcaf411974aebc70a30fa21 namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:46.507589846Z" level=warning msg="cleaning up after shim disconnected" id=2a4e70b9f61c04bd03ed464ddc66786513fe7a5fabcaf411974aebc70a30fa21 namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:46.507610146Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:46.530843097Z" level=warning msg="cleanup warnings time=\"2024-03-18T13:27:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.594849465Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.595644777Z" level=info msg="Daemon shutdown complete"
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.595966682Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.596396089Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 18 13:27:47 cert-expiration-387900 systemd[1]: docker.service: Deactivated successfully.
	Mar 18 13:27:47 cert-expiration-387900 systemd[1]: Stopped Docker Application Container Engine.
	Mar 18 13:27:47 cert-expiration-387900 systemd[1]: docker.service: Consumed 19.309s CPU time.
	Mar 18 13:27:47 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:27:47 cert-expiration-387900 dockerd[7408]: time="2024-03-18T13:27:47.716462627Z" level=info msg="Starting up"
	Mar 18 13:28:47 cert-expiration-387900 dockerd[7408]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
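The decisive failure in the journal above is the last restart: dockerd (pid 7408) logged "Starting up" at 13:27:47 and aborted exactly 60 seconds later at 13:28:47 because its managed containerd never came back on /run/containerd/containerd.sock, so the dial context expired. A minimal, hypothetical Go sketch of that wait-for-socket pattern follows; it is illustrative only, not minikube or dockerd source, and the waitForSocket helper and 60-second budget are assumptions chosen to match the timestamps above.

// waitsock.go — hypothetical sketch (not minikube/dockerd code) of polling a
// unix socket until a deadline, surfacing ctx.Err() ("context deadline
// exceeded") when nothing ever accepts, the same shape as the daemon's error.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// waitForSocket retries a unix-socket dial until it succeeds or ctx expires.
func waitForSocket(ctx context.Context, path string) error {
	var d net.Dialer
	for {
		conn, err := d.DialContext(ctx, "unix", path)
		if err == nil {
			conn.Close()
			return nil
		}
		select {
		case <-ctx.Done():
			// Mirrors the journal line:
			//   failed to dial "/run/containerd/containerd.sock": context deadline exceeded
			return fmt.Errorf("failed to dial %q: %w", path, ctx.Err())
		case <-time.After(500 * time.Millisecond):
			// back off briefly, then retry until the deadline
		}
	}
}

func main() {
	// 60s budget: dockerd started up at 13:27:47 and failed at 13:28:47.
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	if err := waitForSocket(ctx, "/run/containerd/containerd.sock"); err != nil {
		fmt.Println(err)
	}
}

Under these assumptions, if no listener ever appears on the socket the program prints failed to dial "/run/containerd/containerd.sock": context deadline exceeded, which is why systemd then records docker.service exiting with status 1/FAILURE.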
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-18 13:28:48.1250315 +0000 UTC m=+10447.334280701
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-387900 -n cert-expiration-387900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-387900 -n cert-expiration-387900: exit status 2 (13.7017373s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 13:28:48.269170   12724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-expiration-387900 logs -n 25
E0318 13:29:49.654076    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p cert-expiration-387900 logs -n 25: (2m47.0616162s)
helpers_test.go:252: TestCertExpiration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-767100 sudo                                | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | systemctl status kubelet --all                       |               |                   |         |                     |                     |
	|         | --full --no-pager                                    |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo                                | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | systemctl cat kubelet                                |               |                   |         |                     |                     |
	|         | --no-pager                                           |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo                                | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |               |                   |         |                     |                     |
	|         | --full --no-pager                                    |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo cat                            | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo cat                            | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo                                | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | systemctl status docker --all                        |               |                   |         |                     |                     |
	|         | --full --no-pager                                    |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo                                | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | systemctl cat docker                                 |               |                   |         |                     |                     |
	|         | --no-pager                                           |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo cat                            | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | /etc/docker/daemon.json                              |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo docker                         | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | system info                                          |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo                                | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | systemctl status cri-docker                          |               |                   |         |                     |                     |
	|         | --all --full --no-pager                              |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo                                | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | systemctl cat cri-docker                             |               |                   |         |                     |                     |
	|         | --no-pager                                           |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo cat                            | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo cat                            | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo                                | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | cri-dockerd --version                                |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo                                | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | systemctl status containerd                          |               |                   |         |                     |                     |
	|         | --all --full --no-pager                              |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo                                | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | systemctl cat containerd                             |               |                   |         |                     |                     |
	|         | --no-pager                                           |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo cat                            | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo cat                            | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | /etc/containerd/config.toml                          |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo                                | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | containerd config dump                               |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo                                | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | systemctl status crio --all                          |               |                   |         |                     |                     |
	|         | --full --no-pager                                    |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo                                | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | systemctl cat crio --no-pager                        |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo find                           | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |               |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |                   |         |                     |                     |
	| ssh     | -p cilium-767100 sudo crio                           | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | config                                               |               |                   |         |                     |                     |
	| delete  | -p cilium-767100                                     | cilium-767100 | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC | 18 Mar 24 13:28 UTC |
	| start   | -p pause-320800 --memory=2048                        | pause-320800  | minikube6\jenkins | v1.32.0 | 18 Mar 24 13:28 UTC |                     |
	|         | --install-addons=false                               |               |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                           |               |                   |         |                     |                     |
	|---------|------------------------------------------------------|---------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:28:40
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:28:40.523913   12900 out.go:291] Setting OutFile to fd 1708 ...
	I0318 13:28:40.523913   12900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:28:40.523913   12900 out.go:304] Setting ErrFile to fd 1128...
	I0318 13:28:40.523913   12900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:28:40.553487   12900 out.go:298] Setting JSON to false
	I0318 13:28:40.558051   12900 start.go:129] hostinfo: {"hostname":"minikube6","uptime":143844,"bootTime":1710624675,"procs":205,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0318 13:28:40.558299   12900 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:28:40.561775   12900 out.go:177] * [pause-320800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0318 13:28:40.570489   12900 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 13:28:40.570489   12900 notify.go:220] Checking for updates...
	I0318 13:28:40.574518   12900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:28:40.577701   12900 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0318 13:28:40.582820   12900 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 13:28:40.584491   12900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:28:40.714480    8996 main.go:141] libmachine: [stdout =====>] : 172.25.146.81
	
	I0318 13:28:40.714480    8996 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:28:40.723592    8996 main.go:141] libmachine: Using SSH client type: native
	I0318 13:28:40.724103    8996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.146.81 22 <nil> <nil>}
	I0318 13:28:40.724183    8996 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 13:28:40.893896    8996 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 13:28:40.893896    8996 buildroot.go:70] root file system type: tmpfs
	I0318 13:28:40.894172    8996 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 13:28:40.894300    8996 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-437700 ).state
	I0318 13:28:43.192839    8996 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:28:43.193492    8996 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:28:43.193492    8996 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-437700 ).networkadapters[0]).ipaddresses[0]
	I0318 13:28:40.588121   12900 config.go:182] Loaded profile config "cert-expiration-387900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:28:40.589104   12900 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:28:40.589104   12900 config.go:182] Loaded profile config "kubernetes-upgrade-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 13:28:40.590132   12900 config.go:182] Loaded profile config "stopped-upgrade-437700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:28:40.590132   12900 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:28:46.360330   12900 out.go:177] * Using the hyperv driver based on user configuration
	I0318 13:28:46.365593   12900 start.go:297] selected driver: hyperv
	I0318 13:28:46.365593   12900 start.go:901] validating driver "hyperv" against <nil>
	I0318 13:28:46.365593   12900 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:28:46.423948   12900 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:28:46.425471   12900 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:28:46.425596   12900 cni.go:84] Creating CNI manager for ""
	I0318 13:28:46.425596   12900 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:28:46.425596   12900 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:28:46.425768   12900 start.go:340] cluster config:
	{Name:pause-320800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-320800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:28:46.425978   12900 iso.go:125] acquiring lock: {Name:mk859ea173f7c19f70b69d7017f4a5a661cd1500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:28:46.430685   12900 out.go:177] * Starting "pause-320800" primary control-plane node in "pause-320800" cluster
	I0318 13:28:47.748543    4280 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4490953s)
	I0318 13:28:47.764113    4280 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0318 13:28:47.835353    4280 out.go:177] 
	W0318 13:28:47.838740    4280 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Mar 18 13:17:57 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:17:57 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:57.620341409Z" level=info msg="Starting up"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:57.621542505Z" level=info msg="containerd not running, starting managed containerd"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:57.623053100Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.657832886Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690149779Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690194679Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690293679Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690314079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690420078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.690444878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691444575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691495575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691578975Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691599575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.691724274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.692031373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.695395562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.695504962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.695825761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.695922960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.696035560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.696264559Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.696356959Z" level=info msg="metadata content store policy set" policy=shared
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720137881Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720288880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720315580Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720408180Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720424380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.720619579Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721159277Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721308077Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721353277Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721371177Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721386777Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721413177Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721426577Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721441276Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721477676Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721491876Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721505776Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721518476Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721545476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721561876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721641876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721665376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721680176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721694676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721709276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721724076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721738876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721755275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721768575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721781875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721795175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721818675Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721842275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721872675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721884675Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721933775Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721973375Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.721989475Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722002875Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722137074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722175774Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722190674Z" level=info msg="NRI interface is disabled by configuration."
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722600673Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722670872Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722744372Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 18 13:17:57 cert-expiration-387900 dockerd[671]: time="2024-03-18T13:17:57.722766272Z" level=info msg="containerd successfully booted in 0.066468s"
	Mar 18 13:17:58 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:58.692577720Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 18 13:17:58 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:58.733722894Z" level=info msg="Loading containers: start."
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.052398798Z" level=info msg="Loading containers: done."
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.081006743Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.081318551Z" level=info msg="Daemon has completed initialization"
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.210511613Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 18 13:17:59 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:17:59.210562115Z" level=info msg="API listen on [::]:2376"
	Mar 18 13:17:59 cert-expiration-387900 systemd[1]: Started Docker Application Container Engine.
	Mar 18 13:18:32 cert-expiration-387900 systemd[1]: Stopping Docker Application Container Engine...
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.181291454Z" level=info msg="Processing signal 'terminated'"
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.183919460Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.184895463Z" level=info msg="Daemon shutdown complete"
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.185091563Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 18 13:18:32 cert-expiration-387900 dockerd[665]: time="2024-03-18T13:18:32.185188264Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 18 13:18:33 cert-expiration-387900 systemd[1]: docker.service: Deactivated successfully.
	Mar 18 13:18:33 cert-expiration-387900 systemd[1]: Stopped Docker Application Container Engine.
	Mar 18 13:18:33 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:33.276481389Z" level=info msg="Starting up"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:33.277615492Z" level=info msg="containerd not running, starting managed containerd"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:33.279088195Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1027
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.312376178Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.344841260Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345000860Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345090960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345168160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345217960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345234861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345473461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345595261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345634462Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345645162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345671662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.345833962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.349587271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.349712772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.349891672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.349989072Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350071173Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350172773Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350188773Z" level=info msg="metadata content store policy set" policy=shared
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350532074Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350654974Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350696274Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350714374Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350736774Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.350790774Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351236076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351387476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351410876Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351427276Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351446376Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351482776Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351498476Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351610176Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351867177Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351892377Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351914477Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351929677Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351952677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351970077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.351984977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352001377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352056478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352094578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352109978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352125278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352193278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352222278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352237478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352251978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352266878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352290178Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352330678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352345578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352358878Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352408878Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352451479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352466579Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352479079Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352560879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352601679Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352619679Z" level=info msg="NRI interface is disabled by configuration."
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.352949180Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.353334681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.353417081Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 18 13:18:33 cert-expiration-387900 dockerd[1027]: time="2024-03-18T13:18:33.353477481Z" level=info msg="containerd successfully booted in 0.042179s"
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.322595801Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.347570064Z" level=info msg="Loading containers: start."
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.551505373Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.641652798Z" level=info msg="Loading containers: done."
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.666864961Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.667108862Z" level=info msg="Daemon has completed initialization"
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.713775878Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 18 13:18:34 cert-expiration-387900 systemd[1]: Started Docker Application Container Engine.
	Mar 18 13:18:34 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:34.716481285Z" level=info msg="API listen on [::]:2376"
	Mar 18 13:18:49 cert-expiration-387900 systemd[1]: Stopping Docker Application Container Engine...
	Mar 18 13:18:49 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:49.752791635Z" level=info msg="Processing signal 'terminated'"
	Mar 18 13:18:50 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:50.106980920Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 18 13:18:50 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:50.107337821Z" level=info msg="Daemon shutdown complete"
	Mar 18 13:18:50 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:50.107570521Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 18 13:18:50 cert-expiration-387900 dockerd[1021]: time="2024-03-18T13:18:50.107792222Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 18 13:18:51 cert-expiration-387900 systemd[1]: docker.service: Deactivated successfully.
	Mar 18 13:18:51 cert-expiration-387900 systemd[1]: Stopped Docker Application Container Engine.
	Mar 18 13:18:51 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:51.195324938Z" level=info msg="Starting up"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:51.196285540Z" level=info msg="containerd not running, starting managed containerd"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:51.197688743Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1343
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.230756526Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264354710Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264489510Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264547110Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264566011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264600411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264618211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264832311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264943011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264967612Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.264980712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.265060412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.265472713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270127524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270252525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270758126Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270859926Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270903626Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270925426Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.270939326Z" level=info msg="metadata content store policy set" policy=shared
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271178427Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271343627Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271370027Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271390828Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271408628Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271465328Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271807529Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.271966329Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272113529Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272144429Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272161229Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272231630Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272260230Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272277330Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272294130Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272317730Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272335030Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272348630Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272372330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272388530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272405830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272421430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272435730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272450430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272464330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272479530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272494930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272512730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272527230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272542030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272556430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272574231Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272597731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272616631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272632431Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272675231Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272693631Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272706731Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272720331Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272803831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272827531Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.272841031Z" level=info msg="NRI interface is disabled by configuration."
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.273144732Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.273294232Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.273441333Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 18 13:18:51 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:18:51.273545533Z" level=info msg="containerd successfully booted in 0.044143s"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.245630384Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.478605590Z" level=info msg="Loading containers: start."
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.713446595Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.800966497Z" level=info msg="Loading containers: done."
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.831475898Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.831691298Z" level=info msg="Daemon has completed initialization"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.874478599Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 18 13:18:52 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:18:52.874579399Z" level=info msg="API listen on [::]:2376"
	Mar 18 13:18:52 cert-expiration-387900 systemd[1]: Started Docker Application Container Engine.
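	[editor's note] The "Default bridge (docker0)" lines above are dockerd pointing out a config knob: the bridge subnet defaults to 172.17.0.0/16 unless --bip overrides it. For illustration only — the address below is an assumed value, not taken from this run — the override is a single daemon flag:

	dockerd --bip=172.20.0.1/16    # sets docker0's own address; containers get IPs from this subnet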
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.589006262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.589151162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.589164362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.589375062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.647630027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.647745427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.647765827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.647942028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.768505762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.769533963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.769960563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.770878864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.780983976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.781465676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.781487576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:01 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:01.781599776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.140848788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.141565289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.141721590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.142141690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.191467349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.191729350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.191814250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.192096150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.342914431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.343085831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.343107131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.343615832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.416665819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.416953120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.417357520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:02 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:02.418664122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:23 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:23.904111013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:23 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:23.906724319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:23 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:23.906845519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:23 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:23.907431320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.004985431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.005453432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.005591033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.006078534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.299992676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.301494179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.302176280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.303120082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.488396889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.488470289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.488488889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.488735390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.765917699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.765987599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.766001799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.766167600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.961269829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.961429429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.961467630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:19:24 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:19:24.961692730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 13:27:36 cert-expiration-387900 systemd[1]: Stopping Docker Application Container Engine...
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.330645953Z" level=info msg="Processing signal 'terminated'"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.586138504Z" level=info msg="ignoring event" container=96fd0abbc72f21ee1c7509c9acebc203af33ad25e6adecb6e2152237e009d579 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.588066411Z" level=info msg="shim disconnected" id=96fd0abbc72f21ee1c7509c9acebc203af33ad25e6adecb6e2152237e009d579 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.589278815Z" level=warning msg="cleaning up after shim disconnected" id=96fd0abbc72f21ee1c7509c9acebc203af33ad25e6adecb6e2152237e009d579 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.589367916Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.635038386Z" level=info msg="ignoring event" container=c1007e91b363d1dacba626aad982b9a5de8bada51e5670fa92c7d13946c79d49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.636093590Z" level=info msg="shim disconnected" id=c1007e91b363d1dacba626aad982b9a5de8bada51e5670fa92c7d13946c79d49 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.637171494Z" level=warning msg="cleaning up after shim disconnected" id=c1007e91b363d1dacba626aad982b9a5de8bada51e5670fa92c7d13946c79d49 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.637326694Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.639408502Z" level=info msg="ignoring event" container=4cced86914b298bd779418f3daf0d6e310a61bd484ebb1279735b35cdba6da74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.640554006Z" level=info msg="shim disconnected" id=4cced86914b298bd779418f3daf0d6e310a61bd484ebb1279735b35cdba6da74 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.640703307Z" level=warning msg="cleaning up after shim disconnected" id=4cced86914b298bd779418f3daf0d6e310a61bd484ebb1279735b35cdba6da74 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.640840607Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.662626788Z" level=info msg="ignoring event" container=b6c5311bd5083719cc6fadcb45665e0b05f1bbd649ed570184de785450c69a83 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.664633096Z" level=info msg="shim disconnected" id=b6c5311bd5083719cc6fadcb45665e0b05f1bbd649ed570184de785450c69a83 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.665045997Z" level=warning msg="cleaning up after shim disconnected" id=b6c5311bd5083719cc6fadcb45665e0b05f1bbd649ed570184de785450c69a83 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.665164998Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.681329058Z" level=info msg="shim disconnected" id=4cce9ff9cd0b8ae48ebe643ac4ca60469f97cee70bccacc419be12f89fd1473e namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.681405058Z" level=warning msg="cleaning up after shim disconnected" id=4cce9ff9cd0b8ae48ebe643ac4ca60469f97cee70bccacc419be12f89fd1473e namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.681419358Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.681798860Z" level=info msg="ignoring event" container=4cce9ff9cd0b8ae48ebe643ac4ca60469f97cee70bccacc419be12f89fd1473e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.712509774Z" level=info msg="shim disconnected" id=f7ad92eae7109ce0c7b18dd4d8fe5f5ca578a22ad5b4e527947b9c3a1575e380 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.713396177Z" level=warning msg="cleaning up after shim disconnected" id=f7ad92eae7109ce0c7b18dd4d8fe5f5ca578a22ad5b4e527947b9c3a1575e380 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.713579378Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.720299403Z" level=info msg="ignoring event" container=a2a9b74dae2a1b6089bb6b5d372c56e9962447766eb9bc2ac3b5ce33da4d58ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.720502803Z" level=info msg="ignoring event" container=f7ad92eae7109ce0c7b18dd4d8fe5f5ca578a22ad5b4e527947b9c3a1575e380 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.720616804Z" level=info msg="ignoring event" container=317ae2407d998077804175f5f80858ac9fc790baa6bbc8cf13cdcbfa716735e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.726836427Z" level=info msg="shim disconnected" id=317ae2407d998077804175f5f80858ac9fc790baa6bbc8cf13cdcbfa716735e9 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.727084528Z" level=warning msg="cleaning up after shim disconnected" id=317ae2407d998077804175f5f80858ac9fc790baa6bbc8cf13cdcbfa716735e9 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.727283929Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.748058206Z" level=info msg="ignoring event" container=eab2cbbcda48a79953a341e7aa0076e7ca044d2c21b0f0258fb95d61a7503f39 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.748532408Z" level=info msg="shim disconnected" id=eab2cbbcda48a79953a341e7aa0076e7ca044d2c21b0f0258fb95d61a7503f39 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.752572823Z" level=info msg="ignoring event" container=3796fb6d711c31c1713a8e64c2d103ba70403de92dc92ac969dad35e8f99a0f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.754433530Z" level=warning msg="cleaning up after shim disconnected" id=eab2cbbcda48a79953a341e7aa0076e7ca044d2c21b0f0258fb95d61a7503f39 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.757317740Z" level=info msg="shim disconnected" id=f47267364955e899eaab5a77e8c509601f8ce110d2a5c9a89d381014e55530cb namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.766544075Z" level=info msg="ignoring event" container=f47267364955e899eaab5a77e8c509601f8ce110d2a5c9a89d381014e55530cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.770650690Z" level=warning msg="cleaning up after shim disconnected" id=f47267364955e899eaab5a77e8c509601f8ce110d2a5c9a89d381014e55530cb namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.770997291Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.766175973Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.749672112Z" level=info msg="shim disconnected" id=a2a9b74dae2a1b6089bb6b5d372c56e9962447766eb9bc2ac3b5ce33da4d58ae namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.790886365Z" level=warning msg="cleaning up after shim disconnected" id=a2a9b74dae2a1b6089bb6b5d372c56e9962447766eb9bc2ac3b5ce33da4d58ae namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.790911665Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.754349629Z" level=info msg="shim disconnected" id=3796fb6d711c31c1713a8e64c2d103ba70403de92dc92ac969dad35e8f99a0f3 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.798506394Z" level=warning msg="cleaning up after shim disconnected" id=3796fb6d711c31c1713a8e64c2d103ba70403de92dc92ac969dad35e8f99a0f3 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.798581694Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.860606425Z" level=info msg="shim disconnected" id=b4bcc982a59e70008c9f4f9f8323921cfa098f71240b74a76cacf79fe5f26311 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:36.862693732Z" level=info msg="ignoring event" container=b4bcc982a59e70008c9f4f9f8323921cfa098f71240b74a76cacf79fe5f26311 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.862864633Z" level=warning msg="cleaning up after shim disconnected" id=b4bcc982a59e70008c9f4f9f8323921cfa098f71240b74a76cacf79fe5f26311 namespace=moby
	Mar 18 13:27:36 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:36.863100134Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:41 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:41.533158805Z" level=info msg="ignoring event" container=6a7c758b23ead7637582c14d5058f8d4a068f024688bd1fbb8265e4b2711b55b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:41 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:41.534832011Z" level=info msg="shim disconnected" id=6a7c758b23ead7637582c14d5058f8d4a068f024688bd1fbb8265e4b2711b55b namespace=moby
	Mar 18 13:27:41 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:41.534930812Z" level=warning msg="cleaning up after shim disconnected" id=6a7c758b23ead7637582c14d5058f8d4a068f024688bd1fbb8265e4b2711b55b namespace=moby
	Mar 18 13:27:41 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:41.534946712Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.447248733Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=2a4e70b9f61c04bd03ed464ddc66786513fe7a5fabcaf411974aebc70a30fa21
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.506990536Z" level=info msg="ignoring event" container=2a4e70b9f61c04bd03ed464ddc66786513fe7a5fabcaf411974aebc70a30fa21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:46.507485644Z" level=info msg="shim disconnected" id=2a4e70b9f61c04bd03ed464ddc66786513fe7a5fabcaf411974aebc70a30fa21 namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:46.507589846Z" level=warning msg="cleaning up after shim disconnected" id=2a4e70b9f61c04bd03ed464ddc66786513fe7a5fabcaf411974aebc70a30fa21 namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:46.507610146Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1343]: time="2024-03-18T13:27:46.530843097Z" level=warning msg="cleanup warnings time=\"2024-03-18T13:27:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.594849465Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.595644777Z" level=info msg="Daemon shutdown complete"
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.595966682Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 18 13:27:46 cert-expiration-387900 dockerd[1337]: time="2024-03-18T13:27:46.596396089Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 18 13:27:47 cert-expiration-387900 systemd[1]: docker.service: Deactivated successfully.
	Mar 18 13:27:47 cert-expiration-387900 systemd[1]: Stopped Docker Application Container Engine.
	Mar 18 13:27:47 cert-expiration-387900 systemd[1]: docker.service: Consumed 19.309s CPU time.
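	[editor's note] The "Container failed to exit within 10s of signal 15 - using the force" line above is Docker's normal stop escalation: SIGTERM first, then SIGKILL once the grace period lapses. The same escalation can be driven explicitly with the real -t/--time flag (container name below is a hypothetical stand-in):

	docker stop -t 10 some-container    # SIGTERM, then SIGKILL after 10 seconds if still running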
	Mar 18 13:27:47 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:27:47 cert-expiration-387900 dockerd[7408]: time="2024-03-18T13:27:47.716462627Z" level=info msg="Starting up"
	Mar 18 13:28:47 cert-expiration-387900 dockerd[7408]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0318 13:28:47.839731    4280 out.go:239] * 
	W0318 13:28:47.842366    4280 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:28:47.845524    4280 out.go:177] 
	I0318 13:28:45.942518    8996 main.go:141] libmachine: [stdout =====>] : 172.25.146.81
	
	I0318 13:28:45.942518    8996 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:28:45.948871    8996 main.go:141] libmachine: Using SSH client type: native
	I0318 13:28:45.949595    8996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.146.81 22 <nil> <nil>}
	I0318 13:28:45.949595    8996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 13:28:46.138793    8996 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
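The unit above relies on systemd's ExecStart-reset rule that its own comments describe: the empty ExecStart= clears any previously defined start command so that exactly one remains for this Type=notify service. Note that in this run the file is installed as the full unit at /lib/systemd/system/docker.service rather than as a drop-in, so the reset mainly guards against duplicated directives. A quick way to inspect the effective result on the guest, assuming standard systemd tooling (these commands are illustrative, not part of the test run):

	systemctl cat docker                                        # effective unit plus any drop-ins, in override order
	systemd-analyze verify /lib/systemd/system/docker.service   # would flag a duplicate ExecStart= on a non-oneshot service
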
	I0318 13:28:46.138793    8996 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-437700 ).state
	I0318 13:28:49.244819    8996 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:28:49.244819    8996 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:28:49.245013    8996 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-437700 ).networkadapters[0]).ipaddresses[0]
	I0318 13:28:46.432878   12900 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:28:46.432878   12900 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 13:28:46.432878   12900 cache.go:56] Caching tarball of preloaded images
	I0318 13:28:46.433533   12900 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 13:28:46.433700   12900 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:28:46.433700   12900 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-320800\config.json ...
	I0318 13:28:46.433700   12900 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-320800\config.json: {Name:mkbb153d338d25a6a8b5df5dc66472f08daa2c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:28:46.434620   12900 start.go:360] acquireMachinesLock for pause-320800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:28:52.005020    8996 main.go:141] libmachine: [stdout =====>] : 172.25.146.81
	
	I0318 13:28:52.005256    8996 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:28:52.011637    8996 main.go:141] libmachine: Using SSH client type: native
	I0318 13:28:52.012273    8996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.146.81 22 <nil> <nil>}
	I0318 13:28:52.012273    8996 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 13:28:53.440104    8996 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
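The provisioning one-liner above is an update-only-if-changed idiom: diff exits non-zero both when the units differ and, as in this run, when the old unit does not yet exist ("can't stat"), so the new file is installed and docker is enabled and restarted in either case. Unrolled under the same assumptions (a sketch, not minikube's own code):

	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	fi
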
	I0318 13:28:53.440104    8996 machine.go:97] duration metric: took 50.1445881s to provisionDockerMachine
	I0318 13:28:53.440104    8996 start.go:293] postStartSetup for "stopped-upgrade-437700" (driver="hyperv")
	I0318 13:28:53.440104    8996 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:28:53.455182    8996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:28:53.455347    8996 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-437700 ).state
	I0318 13:28:55.798059    8996 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:28:55.798059    8996 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:28:55.798233    8996 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-437700 ).networkadapters[0]).ipaddresses[0]
	I0318 13:28:58.533963    8996 main.go:141] libmachine: [stdout =====>] : 172.25.146.81
	
	I0318 13:28:58.533963    8996 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:28:58.534526    8996 sshutil.go:53] new ssh client: &{IP:172.25.146.81 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\stopped-upgrade-437700\id_rsa Username:docker}
	I0318 13:28:58.652521    8996 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1973076s)
	I0318 13:28:58.667197    8996 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:28:58.674305    8996 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 13:28:58.674305    8996 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0318 13:28:58.674305    8996 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0318 13:28:58.676091    8996 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> 91202.pem in /etc/ssl/certs
	I0318 13:28:58.690405    8996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:28:58.707391    8996 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /etc/ssl/certs/91202.pem (1708 bytes)
	I0318 13:28:58.756504    8996 start.go:296] duration metric: took 5.316367s for postStartSetup
	I0318 13:28:58.756599    8996 fix.go:56] duration metric: took 1m39.0167698s for fixHost
	I0318 13:28:58.756704    8996 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-437700 ).state
	
	
	==> Docker <==
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: Failed to start Docker Application Container Engine.
	Mar 18 13:28:47 cert-expiration-387900 cri-dockerd[1229]: time="2024-03-18T13:28:47Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Mar 18 13:28:47 cert-expiration-387900 cri-dockerd[1229]: time="2024-03-18T13:28:47Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: Stopped Docker Application Container Engine.
	Mar 18 13:28:47 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:28:47 cert-expiration-387900 dockerd[7560]: time="2024-03-18T13:28:47.978605581Z" level=info msg="Starting up"
	Mar 18 13:29:48 cert-expiration-387900 dockerd[7560]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Mar 18 13:29:48 cert-expiration-387900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Mar 18 13:29:48 cert-expiration-387900 cri-dockerd[1229]: time="2024-03-18T13:29:48Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Mar 18 13:29:48 cert-expiration-387900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Mar 18 13:29:48 cert-expiration-387900 systemd[1]: Failed to start Docker Application Container Engine.
	Mar 18 13:29:48 cert-expiration-387900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Mar 18 13:29:48 cert-expiration-387900 systemd[1]: Stopped Docker Application Container Engine.
	Mar 18 13:29:48 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:29:48 cert-expiration-387900 dockerd[7783]: time="2024-03-18T13:29:48.286946974Z" level=info msg="Starting up"
	Mar 18 13:30:48 cert-expiration-387900 dockerd[7783]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Mar 18 13:30:48 cert-expiration-387900 cri-dockerd[1229]: time="2024-03-18T13:30:48Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Mar 18 13:30:48 cert-expiration-387900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Mar 18 13:30:48 cert-expiration-387900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Mar 18 13:30:48 cert-expiration-387900 systemd[1]: Failed to start Docker Application Container Engine.
	Mar 18 13:30:48 cert-expiration-387900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Mar 18 13:30:48 cert-expiration-387900 systemd[1]: Stopped Docker Application Container Engine.
	Mar 18 13:30:48 cert-expiration-387900 systemd[1]: Starting Docker Application Container Engine...
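	
The repeating pattern above is dockerd timing out after 60s while dialing /run/containerd/containerd.sock, after which systemd schedules the next restart (counter 1, 2, 3). Since docker.service only reports the dial deadline, containerd is the first thing to inspect; a triage sketch for the guest (assumes systemd and journalctl are available):

	systemctl status containerd --no-pager
	ls -l /run/containerd/containerd.sock
	journalctl -u containerd -u docker --since "10 min ago" --no-pager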
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-18T13:30:50Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
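	
Both halves of the fallback fail for the same underlying reason: cri-dockerd only proxies CRI calls to the Docker socket, so with dockerd down the CRI endpoint hits its deadline just as the direct docker call is refused. Pointing crictl at the endpoint explicitly reproduces the first error (illustrative only):

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a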
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
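	
This is a downstream symptom rather than an independent failure: with the container runtime down, kube-apiserver never starts, so kubectl's connection to localhost:8443 is refused. A quick check on the guest (sketch; assumes the ss utility is present in the image):

	sudo ss -ltnp | grep 8443 || echo "kube-apiserver is not listening on 8443"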
	
	
	==> dmesg <==
	[  +0.179121] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[Mar18 13:18] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.112018] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.612291] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.237268] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.268990] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +2.882729] systemd-fstab-generator[1182]: Ignoring "noauto" option for root device
	[  +0.209994] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.240910] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.319447] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[ +13.911587] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +0.123662] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.594037] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	[  +6.520033] systemd-fstab-generator[1792]: Ignoring "noauto" option for root device
	[  +0.118411] kauditd_printk_skb: 73 callbacks suppressed
	[Mar18 13:19] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.167866] kauditd_printk_skb: 62 callbacks suppressed
	[  +1.159487] systemd-fstab-generator[2998]: Ignoring "noauto" option for root device
	[ +12.289503] kauditd_printk_skb: 34 callbacks suppressed
	[Mar18 13:27] systemd-fstab-generator[6917]: Ignoring "noauto" option for root device
	[  +0.203438] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.650427] systemd-fstab-generator[6954]: Ignoring "noauto" option for root device
	[  +0.337122] systemd-fstab-generator[6967]: Ignoring "noauto" option for root device
	[  +0.366517] systemd-fstab-generator[6980]: Ignoring "noauto" option for root device
	[  +5.492443] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 13:31:48 up 15 min,  0 users,  load average: 0.09, 0.47, 0.36
	Linux cert-expiration-387900 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 18 13:31:43 cert-expiration-387900 kubelet[2837]: E0318 13:31:43.451556    2837 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-cert-expiration-387900.17bdded90b5c4d3b", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-cert-expiration-387900", UID:"0f31013a599bcab188068af859e32aab", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"https://172.25.153.24:8443/readyz\": dial tcp 172.25.153.24:8443: connect: connection refused", Source:v1.EventSource{Component:"kubelet", Host:"cert-expiration-387900"}, FirstTimestamp:time.Date(2024, time.March, 18, 13, 27, 37, 250524475, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 13, 27, 39, 249337410, time.Local), Count:3, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"cert-expiration-387900"}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-cert-expiration-387900.17bdded90b5c4d3b": dial tcp 172.25.153.24:8443: connect: connection refused'(may retry after sleeping)
	Mar 18 13:31:44 cert-expiration-387900 kubelet[2837]: E0318 13:31:44.360683    2837 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m8.958481442s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Mar 18 13:31:47 cert-expiration-387900 kubelet[2837]: E0318 13:31:47.722135    2837 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-387900?timeout=10s\": dial tcp 172.25.153.24:8443: connect: connection refused" interval="7s"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.292096    2837 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-387900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-387900?resourceVersion=0&timeout=10s\": dial tcp 172.25.153.24:8443: connect: connection refused"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.293127    2837 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-387900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-387900?timeout=10s\": dial tcp 172.25.153.24:8443: connect: connection refused"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.294703    2837 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-387900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-387900?timeout=10s\": dial tcp 172.25.153.24:8443: connect: connection refused"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.296081    2837 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-387900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-387900?timeout=10s\": dial tcp 172.25.153.24:8443: connect: connection refused"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.297631    2837 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-387900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-387900?timeout=10s\": dial tcp 172.25.153.24:8443: connect: connection refused"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.297765    2837 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.593333    2837 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.593403    2837 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.593418    2837 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.593448    2837 kuberuntime_image.go:103] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: I0318 13:31:48.593464    2837 image_gc_manager.go:210] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.593577    2837 kubelet.go:2865] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.593624    2837 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.593645    2837 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.593650    2837 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.596464    2837 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.594775    2837 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.599013    2837 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.599139    2837 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.599378    2837 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.599671    2837 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Mar 18 13:31:48 cert-expiration-387900 kubelet[2837]: E0318 13:31:48.601407    2837 kubelet.go:1402] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0318 13:29:01.948513    1624 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0318 13:29:48.021142    1624 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0318 13:29:48.061090    1624 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0318 13:29:48.103288    1624 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0318 13:29:48.148994    1624 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0318 13:29:48.195420    1624 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0318 13:30:48.322018    1624 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0318 13:30:48.374431    1624 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0318 13:30:48.427441    1624 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-expiration-387900 -n cert-expiration-387900
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-expiration-387900 -n cert-expiration-387900: exit status 2 (13.4997716s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0318 13:31:49.314961    7232 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "cert-expiration-387900" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-387900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-387900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-387900: (1m8.5748582s)
--- FAIL: TestCertExpiration (1355.31s)

TestErrorSpam/setup (203.02s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-013800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 --driver=hyperv
E0318 10:47:21.935293    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:47:21.950382    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:47:21.966081    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:47:21.997156    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:47:22.044252    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:47:22.139182    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:47:22.311035    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:47:22.644979    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:47:23.295643    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:47:24.586614    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:47:27.155817    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:47:32.285458    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:47:42.533272    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:48:03.013976    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:48:43.983624    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:50:05.905837    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-013800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 --driver=hyperv: (3m23.0226575s)
error_spam_test.go:96: unexpected stderr: "W0318 10:47:08.713691    5728 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-013800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=18431
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-013800" primary control-plane node in "nospam-013800" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-013800" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0318 10:47:08.713691    5728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (203.02s)
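Two separate stderr sources appear in this test: the cert_rotation errors reference client certificates of the already-deleted addons-748800 profile that are still listed in the shared kubeconfig, while the assertion itself fails on the Docker CLI context warning. Hedged cleanup sketches for both (not steps the test performs; the profile and context names are taken from the log above):

	kubectl config delete-context addons-748800
	kubectl config delete-cluster addons-748800
	kubectl config unset users.addons-748800
	docker context use default   # may clear the "Unable to resolve the current Docker CLI context" warning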

TestFunctional/serial/MinikubeKubectlCmdDirectly (35.28s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
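functional_test.go:731 fails because a kubectl.exe hard link left over from an earlier run already exists at the target path, and link creation is not idempotent. The usual remedy is to remove the stale link before re-linking; a POSIX-style sketch of the same step (paths follow the log above):

	rm -f out/kubectl.exe
	ln out/minikube-windows-amd64.exe out/kubectl.exe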
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-499500 -n functional-499500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-499500 -n functional-499500: (12.4102421s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 logs -n 25: (9.0653097s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-013800 --log_dir                                     | nospam-013800     | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:51 UTC | 18 Mar 24 10:51 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-013800 --log_dir                                     | nospam-013800     | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:51 UTC | 18 Mar 24 10:51 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-013800 --log_dir                                     | nospam-013800     | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:51 UTC | 18 Mar 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-013800 --log_dir                                     | nospam-013800     | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:52 UTC | 18 Mar 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-013800 --log_dir                                     | nospam-013800     | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:52 UTC | 18 Mar 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-013800 --log_dir                                     | nospam-013800     | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:52 UTC | 18 Mar 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-013800 --log_dir                                     | nospam-013800     | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:53 UTC | 18 Mar 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-013800                                            | nospam-013800     | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:53 UTC | 18 Mar 24 10:53 UTC |
	| start   | -p functional-499500                                        | functional-499500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:53 UTC | 18 Mar 24 10:57 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-499500                                        | functional-499500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:57 UTC | 18 Mar 24 10:59 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-499500 cache add                                 | functional-499500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:59 UTC | 18 Mar 24 11:00 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-499500 cache add                                 | functional-499500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:00 UTC | 18 Mar 24 11:00 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-499500 cache add                                 | functional-499500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:00 UTC | 18 Mar 24 11:00 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-499500 cache add                                 | functional-499500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:00 UTC | 18 Mar 24 11:00 UTC |
	|         | minikube-local-cache-test:functional-499500                 |                   |                   |         |                     |                     |
	| cache   | functional-499500 cache delete                              | functional-499500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:00 UTC | 18 Mar 24 11:00 UTC |
	|         | minikube-local-cache-test:functional-499500                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:00 UTC | 18 Mar 24 11:00 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:00 UTC | 18 Mar 24 11:00 UTC |
	| ssh     | functional-499500 ssh sudo                                  | functional-499500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:00 UTC | 18 Mar 24 11:00 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-499500                                           | functional-499500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:00 UTC | 18 Mar 24 11:00 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-499500 ssh                                       | functional-499500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:00 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-499500 cache reload                              | functional-499500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:01 UTC | 18 Mar 24 11:01 UTC |
	| ssh     | functional-499500 ssh                                       | functional-499500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:01 UTC | 18 Mar 24 11:01 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:01 UTC | 18 Mar 24 11:01 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:01 UTC | 18 Mar 24 11:01 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-499500 kubectl --                                | functional-499500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:01 UTC | 18 Mar 24 11:01 UTC |
	|         | --context functional-499500                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 10:57:42
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 10:57:42.975060   14240 out.go:291] Setting OutFile to fd 576 ...
	I0318 10:57:42.977145   14240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 10:57:42.977220   14240 out.go:304] Setting ErrFile to fd 984...
	I0318 10:57:42.977220   14240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 10:57:43.001935   14240 out.go:298] Setting JSON to false
	I0318 10:57:43.005881   14240 start.go:129] hostinfo: {"hostname":"minikube6","uptime":134787,"bootTime":1710624675,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0318 10:57:43.005881   14240 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 10:57:43.048835   14240 out.go:177] * [functional-499500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0318 10:57:43.052223   14240 notify.go:220] Checking for updates...
	I0318 10:57:43.056491   14240 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 10:57:43.059781   14240 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 10:57:43.062331   14240 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0318 10:57:43.064516   14240 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 10:57:43.067367   14240 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 10:57:43.070905   14240 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 10:57:43.071622   14240 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 10:57:48.571570   14240 out.go:177] * Using the hyperv driver based on existing profile
	I0318 10:57:48.574964   14240 start.go:297] selected driver: hyperv
	I0318 10:57:48.574964   14240 start.go:901] validating driver "hyperv" against &{Name:functional-499500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-499500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.151.65 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 10:57:48.575733   14240 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 10:57:48.627206   14240 cni.go:84] Creating CNI manager for ""
	I0318 10:57:48.627280   14240 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 10:57:48.627391   14240 start.go:340] cluster config:
	{Name:functional-499500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-499500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.151.65 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 10:57:48.627391   14240 iso.go:125] acquiring lock: {Name:mk859ea173f7c19f70b69d7017f4a5a661cd1500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 10:57:48.633205   14240 out.go:177] * Starting "functional-499500" primary control-plane node in "functional-499500" cluster
	I0318 10:57:48.638242   14240 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 10:57:48.639172   14240 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 10:57:48.639172   14240 cache.go:56] Caching tarball of preloaded images
	I0318 10:57:48.639172   14240 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 10:57:48.639172   14240 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 10:57:48.639172   14240 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\config.json ...
	I0318 10:57:48.641736   14240 start.go:360] acquireMachinesLock for functional-499500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 10:57:48.641736   14240 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-499500"
	I0318 10:57:48.641736   14240 start.go:96] Skipping create...Using existing machine configuration
	I0318 10:57:48.642734   14240 fix.go:54] fixHost starting: 
	I0318 10:57:48.642734   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:57:51.437174   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:57:51.437594   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:57:51.437594   14240 fix.go:112] recreateIfNeeded on functional-499500: state=Running err=<nil>
	W0318 10:57:51.437688   14240 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 10:57:51.440793   14240 out.go:177] * Updating the running hyperv "functional-499500" VM ...
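The block above is the driver's standard Hyper-V round trip: every state or IP lookup shells out to PowerShell with -NoProfile -NonInteractive and parses stdout. A minimal Go sketch of that pattern (not minikube's actual driver code; the VM name and the Get-VM expression are taken from the logged invocation):

```go
// Sketch: query a Hyper-V VM's state by shelling out to PowerShell,
// mirroring the "[executing ==>]" lines in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hypervVMState(vmName string) (string, error) {
	// Equivalent to: powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM <name> ).state
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vmName))
	out, err := cmd.Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil // e.g. "Running"
}

func main() {
	state, err := hypervVMState("functional-499500")
	fmt.Println(state, err)
}
```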
	I0318 10:57:51.444528   14240 machine.go:94] provisionDockerMachine start ...
	I0318 10:57:51.444528   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:57:53.639271   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:57:53.639676   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:57:53.639828   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:57:56.217759   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:57:56.218264   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:57:56.223935   14240 main.go:141] libmachine: Using SSH client type: native
	I0318 10:57:56.224264   14240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.65 22 <nil> <nil>}
	I0318 10:57:56.224264   14240 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 10:57:56.348359   14240 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-499500
	
	I0318 10:57:56.348436   14240 buildroot.go:166] provisioning hostname "functional-499500"
	I0318 10:57:56.348436   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:57:58.492321   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:57:58.492321   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:57:58.492321   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:58:01.060856   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:58:01.060856   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:01.066523   14240 main.go:141] libmachine: Using SSH client type: native
	I0318 10:58:01.067206   14240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.65 22 <nil> <nil>}
	I0318 10:58:01.067206   14240 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-499500 && echo "functional-499500" | sudo tee /etc/hostname
	I0318 10:58:01.224436   14240 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-499500
	
	I0318 10:58:01.224436   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:58:03.394377   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:58:03.394377   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:03.394377   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:58:06.024890   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:58:06.025065   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:06.030749   14240 main.go:141] libmachine: Using SSH client type: native
	I0318 10:58:06.031538   14240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.65 22 <nil> <nil>}
	I0318 10:58:06.031538   14240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-499500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-499500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-499500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 10:58:06.158010   14240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
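The SSH snippet above is the usual idempotent /etc/hosts fixup: do nothing if some line already maps an address to the hostname, rewrite an existing 127.0.1.1 entry, otherwise append one. A pure-Go sketch of the same decision operating on the file body as a string (illustration only; minikube does this over SSH with grep/sed as logged):

```go
// Sketch: ensure a hosts-file body maps 127.0.1.1 to the machine hostname.
package main

import (
	"fmt"
	"strings"
)

func ensureHostsEntry(hosts, hostname string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == hostname {
			return hosts // hostname already mapped: nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // replace the stale mapping
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + hostname // no entry at all: append one
}

func main() {
	fmt.Println(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 oldname", "functional-499500"))
}
```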
	I0318 10:58:06.158100   14240 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0318 10:58:06.158100   14240 buildroot.go:174] setting up certificates
	I0318 10:58:06.158214   14240 provision.go:84] configureAuth start
	I0318 10:58:06.158264   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:58:08.320904   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:58:08.320904   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:08.321872   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:58:10.957813   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:58:10.958169   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:10.958249   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:58:13.211929   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:58:13.211929   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:13.211929   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:58:15.884326   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:58:15.884326   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:15.884326   14240 provision.go:143] copyHostCerts
	I0318 10:58:15.885092   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0318 10:58:15.885092   14240 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0318 10:58:15.885092   14240 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0318 10:58:15.885696   14240 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 10:58:15.887036   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0318 10:58:15.887355   14240 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0318 10:58:15.887355   14240 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0318 10:58:15.887773   14240 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 10:58:15.888807   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0318 10:58:15.889121   14240 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0318 10:58:15.889121   14240 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0318 10:58:15.889510   14240 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 10:58:15.890570   14240 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-499500 san=[127.0.0.1 172.25.151.65 functional-499500 localhost minikube]
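The server cert generated above carries the SANs listed in san=[...]: loopback, the VM IP, and the machine hostnames. A standard-library sketch of producing such a certificate; for brevity it self-signs rather than signing with the ca-key.pem shown in the log, so treat it as illustrative only:

```go
// Sketch: build a TLS server certificate with the SANs from the log line.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-499500"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		DNSNames:     []string{"functional-499500", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.25.151.65")},
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here; minikube signs with its CA key instead.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```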
	I0318 10:58:16.497776   14240 provision.go:177] copyRemoteCerts
	I0318 10:58:16.510361   14240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 10:58:16.510443   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:58:18.668219   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:58:18.668219   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:18.668219   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:58:21.279495   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:58:21.279495   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:21.280031   14240 sshutil.go:53] new ssh client: &{IP:172.25.151.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-499500\id_rsa Username:docker}
	I0318 10:58:21.387954   14240 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8775625s)
	I0318 10:58:21.387954   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 10:58:21.388960   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 10:58:21.439315   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 10:58:21.439522   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 10:58:21.491674   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 10:58:21.491703   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 10:58:21.539223   14240 provision.go:87] duration metric: took 15.3809122s to configureAuth
	I0318 10:58:21.539307   14240 buildroot.go:189] setting minikube options for container-runtime
	I0318 10:58:21.539970   14240 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 10:58:21.540190   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:58:23.693976   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:58:23.694230   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:23.694429   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:58:26.341451   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:58:26.341451   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:26.348006   14240 main.go:141] libmachine: Using SSH client type: native
	I0318 10:58:26.348153   14240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.65 22 <nil> <nil>}
	I0318 10:58:26.348153   14240 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 10:58:26.487099   14240 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 10:58:26.487160   14240 buildroot.go:70] root file system type: tmpfs
	I0318 10:58:26.487341   14240 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 10:58:26.487473   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:58:28.725212   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:58:28.725212   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:28.725212   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:58:31.342279   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:58:31.342279   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:31.348816   14240 main.go:141] libmachine: Using SSH client type: native
	I0318 10:58:31.348885   14240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.65 22 <nil> <nil>}
	I0318 10:58:31.348885   14240 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 10:58:31.512930   14240 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 10:58:31.513016   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:58:33.715267   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:58:33.715267   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:33.715267   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:58:36.397393   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:58:36.397671   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:36.405141   14240 main.go:141] libmachine: Using SSH client type: native
	I0318 10:58:36.406146   14240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.65 22 <nil> <nil>}
	I0318 10:58:36.406146   14240 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 10:58:36.546041   14240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 10:58:36.546041   14240 machine.go:97] duration metric: took 45.1012295s to provisionDockerMachine
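The command above is the idempotent unit install: diff the freshly rendered docker.service.new against the live unit and only move it into place, reload systemd, and restart Docker when they differ. A native-Go sketch of the same check (paths from the log; privilege handling omitted):

```go
// Sketch: replace a systemd unit and restart the service only on change.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func installIfChanged(current, rendered string) error {
	old, _ := os.ReadFile(current) // a missing file reads as empty: treated as changed
	next, err := os.ReadFile(rendered)
	if err != nil {
		return err
	}
	if bytes.Equal(old, next) {
		return os.Remove(rendered) // unchanged: drop the .new file, skip the restart
	}
	if err := os.Rename(rendered, current); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(installIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"))
}
```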
	I0318 10:58:36.546041   14240 start.go:293] postStartSetup for "functional-499500" (driver="hyperv")
	I0318 10:58:36.546041   14240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 10:58:36.561794   14240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 10:58:36.561794   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:58:38.780468   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:58:38.780468   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:38.780538   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:58:41.382120   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:58:41.382120   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:41.382663   14240 sshutil.go:53] new ssh client: &{IP:172.25.151.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-499500\id_rsa Username:docker}
	I0318 10:58:41.485720   14240 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9235405s)
	I0318 10:58:41.497742   14240 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 10:58:41.505559   14240 command_runner.go:130] > NAME=Buildroot
	I0318 10:58:41.505559   14240 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 10:58:41.505559   14240 command_runner.go:130] > ID=buildroot
	I0318 10:58:41.505559   14240 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 10:58:41.505559   14240 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 10:58:41.505559   14240 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 10:58:41.505559   14240 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0318 10:58:41.506547   14240 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0318 10:58:41.507477   14240 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> 91202.pem in /etc/ssl/certs
	I0318 10:58:41.507477   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /etc/ssl/certs/91202.pem
	I0318 10:58:41.508726   14240 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9120\hosts -> hosts in /etc/test/nested/copy/9120
	I0318 10:58:41.508726   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9120\hosts -> /etc/test/nested/copy/9120/hosts
	I0318 10:58:41.520339   14240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9120
	I0318 10:58:41.539406   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /etc/ssl/certs/91202.pem (1708 bytes)
	I0318 10:58:41.590081   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9120\hosts --> /etc/test/nested/copy/9120/hosts (40 bytes)
	I0318 10:58:41.639872   14240 start.go:296] duration metric: took 5.0937989s for postStartSetup
	I0318 10:58:41.639872   14240 fix.go:56] duration metric: took 52.9968045s for fixHost
	I0318 10:58:41.640870   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:58:43.857467   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:58:43.858551   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:43.858551   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:58:46.476376   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:58:46.476376   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:46.482257   14240 main.go:141] libmachine: Using SSH client type: native
	I0318 10:58:46.483159   14240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.65 22 <nil> <nil>}
	I0318 10:58:46.483159   14240 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 10:58:46.605116   14240 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710759526.593595921
	
	I0318 10:58:46.605233   14240 fix.go:216] guest clock: 1710759526.593595921
	I0318 10:58:46.605233   14240 fix.go:229] Guest: 2024-03-18 10:58:46.593595921 +0000 UTC Remote: 2024-03-18 10:58:41.6398727 +0000 UTC m=+58.853322801 (delta=4.953723221s)
	I0318 10:58:46.605376   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:58:48.782980   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:58:48.782980   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:48.783095   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:58:51.397145   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:58:51.397145   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:51.403207   14240 main.go:141] libmachine: Using SSH client type: native
	I0318 10:58:51.403983   14240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.65 22 <nil> <nil>}
	I0318 10:58:51.403983   14240 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710759526
	I0318 10:58:51.549876   14240 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 10:58:46 UTC 2024
	
	I0318 10:58:51.549876   14240 fix.go:236] clock set: Mon Mar 18 10:58:46 UTC 2024
	 (err=<nil>)
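The sequence above measures the guest/host clock delta (~4.95s here) and corrects the guest with `sudo date -s @<epoch>`. A sketch of that decision; the tolerance, and treating the host clock as the reference, are assumptions, since the log lines do not state either explicitly:

```go
// Sketch: decide whether to reset the guest clock from a `date +%s.%N` sample.
package main

import (
	"fmt"
	"time"
)

// syncCmd returns the command to run on the guest, or "" when the clocks
// are within tolerance. guestUnix is the parsed `date +%s.%N` output.
func syncCmd(guestUnix float64, host time.Time, tolerance time.Duration) string {
	sec := int64(guestUnix)
	guest := time.Unix(sec, int64((guestUnix-float64(sec))*1e9))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		return ""
	}
	return fmt.Sprintf("sudo date -s @%d", host.Unix())
}

func main() {
	// Guest value taken from the log: 1710759526.593595921.
	fmt.Println(syncCmd(1710759526.593595921, time.Now(), 2*time.Second))
}
```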
	I0318 10:58:51.549876   14240 start.go:83] releasing machines lock for "functional-499500", held for 1m2.9077434s
	I0318 10:58:51.550149   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:58:53.727057   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:58:53.727890   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:53.728003   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:58:56.351045   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:58:56.351270   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:56.356919   14240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 10:58:56.357145   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:58:56.369404   14240 ssh_runner.go:195] Run: cat /version.json
	I0318 10:58:56.369404   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:58:58.642059   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:58:58.642059   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:58.642059   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:58:58.653974   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:58:58.653974   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:58:58.654149   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:59:01.409383   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:59:01.409383   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:59:01.410229   14240 sshutil.go:53] new ssh client: &{IP:172.25.151.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-499500\id_rsa Username:docker}
	I0318 10:59:01.440260   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:59:01.440260   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:59:01.440337   14240 sshutil.go:53] new ssh client: &{IP:172.25.151.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-499500\id_rsa Username:docker}
	I0318 10:59:01.511792   14240 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0318 10:59:01.511792   14240 ssh_runner.go:235] Completed: cat /version.json: (5.1423551s)
	I0318 10:59:01.526518   14240 ssh_runner.go:195] Run: systemctl --version
	I0318 10:59:01.586115   14240 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 10:59:01.586221   14240 command_runner.go:130] > systemd 252 (252)
	I0318 10:59:01.586221   14240 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 10:59:01.586221   14240 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2291671s)
	I0318 10:59:01.600982   14240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 10:59:01.609269   14240 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 10:59:01.609683   14240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 10:59:01.623038   14240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 10:59:01.642668   14240 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
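The find command above renames any bridge/podman CNI config with a .mk_disabled suffix so the runtime stops loading it; here none existed. A Go sketch of the same pass (directory from the log, matching simplified to substring checks):

```go
// Sketch: disable bridge/podman CNI configs by renaming them aside.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // only plain files directly under dir, like find -maxdepth 1 -type f
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(moved, err) // empty slice -> "nothing to disable", as logged
}
```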
	I0318 10:59:01.642668   14240 start.go:494] detecting cgroup driver to use...
	I0318 10:59:01.642668   14240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 10:59:01.680804   14240 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0318 10:59:01.696721   14240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 10:59:01.730642   14240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 10:59:01.760620   14240 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 10:59:01.774352   14240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 10:59:01.810035   14240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 10:59:01.844026   14240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 10:59:01.880885   14240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 10:59:01.914967   14240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 10:59:01.950703   14240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 10:59:01.989434   14240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 10:59:02.008761   14240 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 10:59:02.020160   14240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 10:59:02.054972   14240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 10:59:02.336021   14240 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 10:59:02.374004   14240 start.go:494] detecting cgroup driver to use...
	I0318 10:59:02.386920   14240 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 10:59:02.420865   14240 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0318 10:59:02.421021   14240 command_runner.go:130] > [Unit]
	I0318 10:59:02.421021   14240 command_runner.go:130] > Description=Docker Application Container Engine
	I0318 10:59:02.421021   14240 command_runner.go:130] > Documentation=https://docs.docker.com
	I0318 10:59:02.421021   14240 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0318 10:59:02.421021   14240 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0318 10:59:02.421021   14240 command_runner.go:130] > StartLimitBurst=3
	I0318 10:59:02.421021   14240 command_runner.go:130] > StartLimitIntervalSec=60
	I0318 10:59:02.421021   14240 command_runner.go:130] > [Service]
	I0318 10:59:02.421021   14240 command_runner.go:130] > Type=notify
	I0318 10:59:02.421021   14240 command_runner.go:130] > Restart=on-failure
	I0318 10:59:02.421149   14240 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0318 10:59:02.421149   14240 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0318 10:59:02.421149   14240 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0318 10:59:02.421149   14240 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0318 10:59:02.421227   14240 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0318 10:59:02.421265   14240 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0318 10:59:02.421287   14240 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0318 10:59:02.421287   14240 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0318 10:59:02.421287   14240 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0318 10:59:02.421287   14240 command_runner.go:130] > ExecStart=
	I0318 10:59:02.421287   14240 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0318 10:59:02.421287   14240 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0318 10:59:02.421419   14240 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0318 10:59:02.421419   14240 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0318 10:59:02.421419   14240 command_runner.go:130] > LimitNOFILE=infinity
	I0318 10:59:02.421419   14240 command_runner.go:130] > LimitNPROC=infinity
	I0318 10:59:02.421419   14240 command_runner.go:130] > LimitCORE=infinity
	I0318 10:59:02.421546   14240 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0318 10:59:02.421546   14240 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0318 10:59:02.421546   14240 command_runner.go:130] > TasksMax=infinity
	I0318 10:59:02.421546   14240 command_runner.go:130] > TimeoutStartSec=0
	I0318 10:59:02.421546   14240 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0318 10:59:02.421546   14240 command_runner.go:130] > Delegate=yes
	I0318 10:59:02.421546   14240 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0318 10:59:02.421546   14240 command_runner.go:130] > KillMode=process
	I0318 10:59:02.421633   14240 command_runner.go:130] > [Install]
	I0318 10:59:02.421660   14240 command_runner.go:130] > WantedBy=multi-user.target
	I0318 10:59:02.434945   14240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 10:59:02.474172   14240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 10:59:02.536480   14240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 10:59:02.574934   14240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 10:59:02.599013   14240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 10:59:02.639686   14240 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0318 10:59:02.653664   14240 ssh_runner.go:195] Run: which cri-dockerd
	I0318 10:59:02.660322   14240 command_runner.go:130] > /usr/bin/cri-dockerd
	I0318 10:59:02.672861   14240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 10:59:02.691063   14240 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 10:59:02.739261   14240 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 10:59:03.048935   14240 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 10:59:03.343339   14240 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 10:59:03.343589   14240 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 10:59:03.397015   14240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 10:59:03.693962   14240 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 10:59:16.601009   14240 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.9069659s)
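The "scp memory --> /etc/docker/daemon.json (130 bytes)" step a few lines up writes the cgroupfs daemon configuration that the restart above picks up. The exact payload is not shown in the log, so the fields in this sketch are assumptions for illustration only:

```go
// Sketch: render a daemon.json selecting the cgroupfs cgroup driver.
// Field choices beyond exec-opts are assumed, not taken from the log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]interface{}{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b)) // body that would be written to /etc/docker/daemon.json
}
```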
	I0318 10:59:16.613650   14240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 10:59:16.654739   14240 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0318 10:59:16.695715   14240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 10:59:16.730949   14240 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 10:59:16.950383   14240 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 10:59:17.185968   14240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 10:59:17.408100   14240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 10:59:17.449642   14240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 10:59:17.486983   14240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 10:59:17.704603   14240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 10:59:17.824828   14240 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 10:59:17.837294   14240 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 10:59:17.846078   14240 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0318 10:59:17.846078   14240 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 10:59:17.846078   14240 command_runner.go:130] > Device: 0,22	Inode: 1514        Links: 1
	I0318 10:59:17.846078   14240 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0318 10:59:17.846078   14240 command_runner.go:130] > Access: 2024-03-18 10:59:17.820660674 +0000
	I0318 10:59:17.846078   14240 command_runner.go:130] > Modify: 2024-03-18 10:59:17.720640404 +0000
	I0318 10:59:17.846078   14240 command_runner.go:130] > Change: 2024-03-18 10:59:17.724641215 +0000
	I0318 10:59:17.846078   14240 command_runner.go:130] >  Birth: -
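After restarting cri-docker, minikube waits up to 60s for /var/run/cri-dockerd.sock to appear before proceeding, as logged above. A sketch of that wait as a polling loop (the poll interval is an assumption; the log only shows the 60s cap and the final stat):

```go
// Sketch: poll for a Unix socket path with a deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // socket exists, like the stat output in the log
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}
```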
	I0318 10:59:17.846078   14240 start.go:562] Will wait 60s for crictl version
	I0318 10:59:17.858715   14240 ssh_runner.go:195] Run: which crictl
	I0318 10:59:17.865342   14240 command_runner.go:130] > /usr/bin/crictl
	I0318 10:59:17.877795   14240 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 10:59:17.952306   14240 command_runner.go:130] > Version:  0.1.0
	I0318 10:59:17.952306   14240 command_runner.go:130] > RuntimeName:  docker
	I0318 10:59:17.953182   14240 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0318 10:59:17.953182   14240 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 10:59:17.954252   14240 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 10:59:17.963896   14240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 10:59:18.000980   14240 command_runner.go:130] > 25.0.4
	I0318 10:59:18.012006   14240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 10:59:18.046455   14240 command_runner.go:130] > 25.0.4
	I0318 10:59:18.064785   14240 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 10:59:18.064785   14240 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 10:59:18.069796   14240 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 10:59:18.070259   14240 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 10:59:18.070259   14240 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 10:59:18.070259   14240 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ae:0d:2c Flags:up|broadcast|multicast|running}
	I0318 10:59:18.073957   14240 ip.go:210] interface addr: fe80::f8a6:d6b6:cc4:1ba0/64
	I0318 10:59:18.074013   14240 ip.go:210] interface addr: 172.25.144.1/20
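The ip.go lines above walk the host's adapters, skip the names that do not start with the wanted prefix, and keep the first match ("vEthernet (Default Switch)") along with its addresses. A Go sketch of the same prefix search returning the interface's first IPv4 address:

```go
// Sketch: find the host IP for the first interface matching a name prefix.
package main

import (
	"fmt"
	"net"
	"strings"
)

func ipForInterface(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1" above
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil // e.g. 172.25.144.1 from the log
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q", prefix)
}

func main() {
	ip, err := ipForInterface("vEthernet (Default Switch)")
	fmt.Println(ip, err)
}
```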
	I0318 10:59:18.086495   14240 ssh_runner.go:195] Run: grep 172.25.144.1	host.minikube.internal$ /etc/hosts
	I0318 10:59:18.094264   14240 command_runner.go:130] > 172.25.144.1	host.minikube.internal
	I0318 10:59:18.094264   14240 kubeadm.go:877] updating cluster {Name:functional-499500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-499500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.151.65 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 10:59:18.094788   14240 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 10:59:18.104532   14240 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 10:59:18.133301   14240 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 10:59:18.133386   14240 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 10:59:18.133386   14240 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 10:59:18.133386   14240 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 10:59:18.133386   14240 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 10:59:18.133386   14240 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 10:59:18.133386   14240 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 10:59:18.133386   14240 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 10:59:18.133497   14240 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 10:59:18.133572   14240 docker.go:615] Images already preloaded, skipping extraction
	I0318 10:59:18.145913   14240 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 10:59:18.172393   14240 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 10:59:18.172393   14240 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 10:59:18.172393   14240 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 10:59:18.172393   14240 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 10:59:18.172393   14240 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 10:59:18.172393   14240 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 10:59:18.172393   14240 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 10:59:18.172393   14240 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 10:59:18.172599   14240 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 10:59:18.172679   14240 cache_images.go:84] Images are preloaded, skipping loading
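cache_images.go decides to skip loading because every required image already appears in the `docker images --format {{.Repository}}:{{.Tag}}` output above. A sketch of that set comparison (the required list here is a subset, for illustration):

```go
// Sketch: report which required images are absent from the runtime.
package main

import "fmt"

func missingImages(required, present []string) []string {
	have := make(map[string]bool, len(present))
	for _, img := range present {
		have[img] = true
	}
	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/pause:3.9",
	}
	present := []string{ // as listed in the log output above
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/pause:3.9",
	}
	fmt.Println("missing:", missingImages(required, present)) // empty -> skip loading
}
```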
	I0318 10:59:18.172739   14240 kubeadm.go:928] updating node { 172.25.151.65 8441 v1.28.4 docker true true} ...
	I0318 10:59:18.172815   14240 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-499500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.151.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-499500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 10:59:18.181920   14240 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 10:59:18.216651   14240 command_runner.go:130] > cgroupfs
	I0318 10:59:18.216651   14240 cni.go:84] Creating CNI manager for ""
	I0318 10:59:18.216651   14240 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 10:59:18.216651   14240 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 10:59:18.216651   14240 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.151.65 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-499500 NodeName:functional-499500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.151.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.151.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 10:59:18.216651   14240 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.151.65
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-499500"
	  kubeletExtraArgs:
	    node-ip: 172.25.151.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.151.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
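A minimal sketch of the render step behind the multi-document kubeadm config above, assuming a hypothetical params struct and template rather than minikube's actual types (the values come from the kubeadm.go:181 options line):

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is an illustrative stand-in for the options minikube logs
// at kubeadm.go:181; only a few fields are shown.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	PodSubnet        string
	ServiceSubnet    string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress: "172.25.151.65",
		BindPort:         8441,
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	})
}
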
	I0318 10:59:18.227616   14240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 10:59:18.251570   14240 command_runner.go:130] > kubeadm
	I0318 10:59:18.251703   14240 command_runner.go:130] > kubectl
	I0318 10:59:18.251703   14240 command_runner.go:130] > kubelet
	I0318 10:59:18.251703   14240 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 10:59:18.264508   14240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 10:59:18.281957   14240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0318 10:59:18.312701   14240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 10:59:18.343880   14240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0318 10:59:18.384989   14240 ssh_runner.go:195] Run: grep 172.25.151.65	control-plane.minikube.internal$ /etc/hosts
	I0318 10:59:18.391803   14240 command_runner.go:130] > 172.25.151.65	control-plane.minikube.internal
	I0318 10:59:18.402941   14240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 10:59:18.622906   14240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 10:59:18.650834   14240 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500 for IP: 172.25.151.65
	I0318 10:59:18.650834   14240 certs.go:194] generating shared ca certs ...
	I0318 10:59:18.650834   14240 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:59:18.651678   14240 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0318 10:59:18.651948   14240 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0318 10:59:18.651948   14240 certs.go:256] generating profile certs ...
	I0318 10:59:18.652849   14240 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.key
	I0318 10:59:18.652849   14240 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\apiserver.key.50b085b1
	I0318 10:59:18.652849   14240 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\proxy-client.key
	I0318 10:59:18.653702   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 10:59:18.653702   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 10:59:18.653702   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 10:59:18.653702   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 10:59:18.653702   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 10:59:18.653702   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 10:59:18.653702   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 10:59:18.654696   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 10:59:18.654696   14240 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem (1338 bytes)
	W0318 10:59:18.654696   14240 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120_empty.pem, impossibly tiny 0 bytes
	I0318 10:59:18.654696   14240 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0318 10:59:18.655735   14240 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 10:59:18.655735   14240 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 10:59:18.655735   14240 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 10:59:18.656690   14240 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem (1708 bytes)
	I0318 10:59:18.656690   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem -> /usr/share/ca-certificates/9120.pem
	I0318 10:59:18.656690   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /usr/share/ca-certificates/91202.pem
	I0318 10:59:18.656690   14240 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 10:59:18.658702   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 10:59:18.711709   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 10:59:18.762638   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 10:59:18.806537   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 10:59:18.856909   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 10:59:18.907482   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 10:59:18.960833   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 10:59:19.014951   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 10:59:19.066617   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem --> /usr/share/ca-certificates/9120.pem (1338 bytes)
	I0318 10:59:19.114468   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /usr/share/ca-certificates/91202.pem (1708 bytes)
	I0318 10:59:19.183036   14240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 10:59:19.230706   14240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 10:59:19.275997   14240 ssh_runner.go:195] Run: openssl version
	I0318 10:59:19.284433   14240 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 10:59:19.296634   14240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9120.pem && ln -fs /usr/share/ca-certificates/9120.pem /etc/ssl/certs/9120.pem"
	I0318 10:59:19.332306   14240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9120.pem
	I0318 10:59:19.340213   14240 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 10:59:19.340213   14240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 10:59:19.351203   14240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9120.pem
	I0318 10:59:19.360906   14240 command_runner.go:130] > 51391683
	I0318 10:59:19.375552   14240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9120.pem /etc/ssl/certs/51391683.0"
	I0318 10:59:19.409050   14240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91202.pem && ln -fs /usr/share/ca-certificates/91202.pem /etc/ssl/certs/91202.pem"
	I0318 10:59:19.444588   14240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91202.pem
	I0318 10:59:19.452329   14240 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 10:59:19.452545   14240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 10:59:19.465220   14240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91202.pem
	I0318 10:59:19.473794   14240 command_runner.go:130] > 3ec20f2e
	I0318 10:59:19.486432   14240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91202.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 10:59:19.520913   14240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 10:59:19.551489   14240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 10:59:19.559197   14240 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 10:59:19.559254   14240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 10:59:19.570890   14240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 10:59:19.580299   14240 command_runner.go:130] > b5213941
	I0318 10:59:19.593660   14240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
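
The three hash-and-symlink rounds above implement OpenSSL's CA lookup convention: each trusted PEM in /etc/ssl/certs needs a symlink named <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`. A sketch of one round, shelling out to openssl the same way ssh_runner does (paths from this log; minimal error handling):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert mirrors one round of the log above: compute the OpenSSL subject
// hash of a PEM, then create /etc/ssl/certs/<hash>.0 unless it already
// exists (the `test -L ... || ln -fs ...` step).
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "51391683" for 9120.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/9120.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
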
	I0318 10:59:19.623289   14240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 10:59:19.631170   14240 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 10:59:19.631170   14240 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0318 10:59:19.631170   14240 command_runner.go:130] > Device: 8,1	Inode: 7336229     Links: 1
	I0318 10:59:19.631252   14240 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 10:59:19.631252   14240 command_runner.go:130] > Access: 2024-03-18 10:56:34.179178751 +0000
	I0318 10:59:19.631280   14240 command_runner.go:130] > Modify: 2024-03-18 10:56:34.179178751 +0000
	I0318 10:59:19.631280   14240 command_runner.go:130] > Change: 2024-03-18 10:56:34.179178751 +0000
	I0318 10:59:19.631280   14240 command_runner.go:130] >  Birth: 2024-03-18 10:56:34.179178751 +0000
	I0318 10:59:19.643747   14240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 10:59:19.653100   14240 command_runner.go:130] > Certificate will not expire
	I0318 10:59:19.665259   14240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 10:59:19.674378   14240 command_runner.go:130] > Certificate will not expire
	I0318 10:59:19.686665   14240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 10:59:19.696610   14240 command_runner.go:130] > Certificate will not expire
	I0318 10:59:19.708952   14240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 10:59:19.718034   14240 command_runner.go:130] > Certificate will not expire
	I0318 10:59:19.729765   14240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 10:59:19.738840   14240 command_runner.go:130] > Certificate will not expire
	I0318 10:59:19.754554   14240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 10:59:19.762943   14240 command_runner.go:130] > Certificate will not expire
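
Each `-checkend 86400` call above asks OpenSSL whether the certificate expires within the next 24 hours (86400 seconds); "Certificate will not expire" means it does not. A standard-library equivalent of that check, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
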
	I0318 10:59:19.763834   14240 kubeadm.go:391] StartCluster: {Name:functional-499500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-499500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.151.65 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 10:59:19.773919   14240 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 10:59:19.812742   14240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 10:59:19.832481   14240 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0318 10:59:19.832481   14240 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0318 10:59:19.832481   14240 command_runner.go:130] > /var/lib/minikube/etcd:
	I0318 10:59:19.832481   14240 command_runner.go:130] > member
	W0318 10:59:19.832481   14240 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 10:59:19.832481   14240 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 10:59:19.832481   14240 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 10:59:19.844476   14240 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 10:59:19.865610   14240 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 10:59:19.867124   14240 kubeconfig.go:125] found "functional-499500" server: "https://172.25.151.65:8441"
	I0318 10:59:19.868316   14240 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 10:59:19.868451   14240 kapi.go:59] client config for functional-499500: &rest.Config{Host:"https://172.25.151.65:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-499500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-499500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 10:59:19.870100   14240 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 10:59:19.882112   14240 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 10:59:19.905518   14240 kubeadm.go:624] The running cluster does not require reconfiguration: 172.25.151.65
	I0318 10:59:19.905518   14240 kubeadm.go:1154] stopping kube-system containers ...
	I0318 10:59:19.916148   14240 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 10:59:19.946870   14240 command_runner.go:130] > 848095ad1ab3
	I0318 10:59:19.946870   14240 command_runner.go:130] > bf2f77075c42
	I0318 10:59:19.946870   14240 command_runner.go:130] > 775a09aa04f6
	I0318 10:59:19.946870   14240 command_runner.go:130] > aa0ce1c41429
	I0318 10:59:19.946870   14240 command_runner.go:130] > aab0bdb305c4
	I0318 10:59:19.946870   14240 command_runner.go:130] > 70e7eb023004
	I0318 10:59:19.946870   14240 command_runner.go:130] > 6d29ea5cd609
	I0318 10:59:19.946870   14240 command_runner.go:130] > d3608688bd85
	I0318 10:59:19.946870   14240 command_runner.go:130] > ff1e361f0d67
	I0318 10:59:19.946870   14240 command_runner.go:130] > fe08297df9b1
	I0318 10:59:19.946870   14240 command_runner.go:130] > cee324202f97
	I0318 10:59:19.946870   14240 command_runner.go:130] > d2a94d355f44
	I0318 10:59:19.946870   14240 command_runner.go:130] > b0896172ad0a
	I0318 10:59:19.946870   14240 command_runner.go:130] > 598b6fbfbc84
	I0318 10:59:19.946870   14240 docker.go:483] Stopping containers: [848095ad1ab3 bf2f77075c42 775a09aa04f6 aa0ce1c41429 aab0bdb305c4 70e7eb023004 6d29ea5cd609 d3608688bd85 ff1e361f0d67 fe08297df9b1 cee324202f97 d2a94d355f44 b0896172ad0a 598b6fbfbc84]
	I0318 10:59:19.958132   14240 ssh_runner.go:195] Run: docker stop 848095ad1ab3 bf2f77075c42 775a09aa04f6 aa0ce1c41429 aab0bdb305c4 70e7eb023004 6d29ea5cd609 d3608688bd85 ff1e361f0d67 fe08297df9b1 cee324202f97 d2a94d355f44 b0896172ad0a 598b6fbfbc84
	I0318 10:59:19.988706   14240 command_runner.go:130] > 848095ad1ab3
	I0318 10:59:19.988706   14240 command_runner.go:130] > bf2f77075c42
	I0318 10:59:19.988706   14240 command_runner.go:130] > 775a09aa04f6
	I0318 10:59:19.988706   14240 command_runner.go:130] > aa0ce1c41429
	I0318 10:59:19.988706   14240 command_runner.go:130] > aab0bdb305c4
	I0318 10:59:19.988706   14240 command_runner.go:130] > 70e7eb023004
	I0318 10:59:19.988706   14240 command_runner.go:130] > 6d29ea5cd609
	I0318 10:59:19.988706   14240 command_runner.go:130] > d3608688bd85
	I0318 10:59:19.988706   14240 command_runner.go:130] > ff1e361f0d67
	I0318 10:59:19.988706   14240 command_runner.go:130] > fe08297df9b1
	I0318 10:59:19.988706   14240 command_runner.go:130] > cee324202f97
	I0318 10:59:19.988706   14240 command_runner.go:130] > d2a94d355f44
	I0318 10:59:19.988706   14240 command_runner.go:130] > b0896172ad0a
	I0318 10:59:19.988706   14240 command_runner.go:130] > 598b6fbfbc84
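
The stop step above finds kube-system pod containers by Docker's k8s_<container>_<pod>_<namespace>_ naming convention and stops them in a single `docker stop`. A sketch of the same two CLI calls driven from Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all containers whose name marks them as kube-system pods,
	// matching the filter shown in the log.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return
	}
	// docker stop accepts multiple IDs in one invocation.
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		fmt.Println(err)
	}
}
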
	I0318 10:59:20.000698   14240 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 10:59:20.068563   14240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 10:59:20.086972   14240 command_runner.go:130] > -rw------- 1 root root 5643 Mar 18 10:56 /etc/kubernetes/admin.conf
	I0318 10:59:20.086972   14240 command_runner.go:130] > -rw------- 1 root root 5653 Mar 18 10:56 /etc/kubernetes/controller-manager.conf
	I0318 10:59:20.086972   14240 command_runner.go:130] > -rw------- 1 root root 2007 Mar 18 10:56 /etc/kubernetes/kubelet.conf
	I0318 10:59:20.086972   14240 command_runner.go:130] > -rw------- 1 root root 5601 Mar 18 10:56 /etc/kubernetes/scheduler.conf
	I0318 10:59:20.086972   14240 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar 18 10:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Mar 18 10:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Mar 18 10:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Mar 18 10:56 /etc/kubernetes/scheduler.conf
	
	I0318 10:59:20.099067   14240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0318 10:59:20.116987   14240 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0318 10:59:20.126970   14240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0318 10:59:20.145495   14240 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0318 10:59:20.157296   14240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0318 10:59:20.173593   14240 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 10:59:20.188404   14240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 10:59:20.215204   14240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0318 10:59:20.232617   14240 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 10:59:20.245624   14240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
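
The grep/rm sequence above keeps only the kubeconfigs that already point at https://control-plane.minikube.internal:8441 and removes the stale ones (controller-manager.conf and scheduler.conf here) so the following `kubeadm init phase kubeconfig` run regenerates them. Equivalent logic as a sketch:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8441")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Unreadable or missing the expected server line: delete so
		// kubeadm rewrites it on the next init phase.
		if err != nil || !bytes.Contains(data, endpoint) {
			fmt.Println("removing stale", f)
			os.Remove(f)
		}
	}
}
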
	I0318 10:59:20.276103   14240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 10:59:20.295355   14240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 10:59:20.380213   14240 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 10:59:20.380213   14240 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0318 10:59:20.380213   14240 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0318 10:59:20.380301   14240 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 10:59:20.380301   14240 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0318 10:59:20.380301   14240 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0318 10:59:20.380373   14240 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0318 10:59:20.380373   14240 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0318 10:59:20.380373   14240 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0318 10:59:20.380373   14240 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 10:59:20.380442   14240 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 10:59:20.380442   14240 command_runner.go:130] > [certs] Using the existing "sa" key
	I0318 10:59:20.380474   14240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 10:59:21.684835   14240 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 10:59:21.684835   14240 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0318 10:59:21.684835   14240 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0318 10:59:21.684835   14240 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 10:59:21.684835   14240 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 10:59:21.684835   14240 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.304285s)
	I0318 10:59:21.684835   14240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 10:59:22.017697   14240 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 10:59:22.017697   14240 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 10:59:22.017697   14240 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0318 10:59:22.017697   14240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 10:59:22.127332   14240 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 10:59:22.127332   14240 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 10:59:22.127332   14240 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 10:59:22.127332   14240 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 10:59:22.127434   14240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 10:59:22.234888   14240 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 10:59:22.235042   14240 api_server.go:52] waiting for apiserver process to appear ...
	I0318 10:59:22.246800   14240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 10:59:22.757116   14240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 10:59:23.248623   14240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 10:59:23.760593   14240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 10:59:24.254362   14240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 10:59:24.288463   14240 command_runner.go:130] > 6433
	I0318 10:59:24.288463   14240 api_server.go:72] duration metric: took 2.0534074s to wait for apiserver process to appear ...
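
The poll above re-runs `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until a PID (6433 here) appears. A generic version of that wait, standard library only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForProcess polls pgrep until pattern matches or the timeout elapses.
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // the PID, e.g. "6433"
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", time.Minute)
	fmt.Println(pid, err)
}
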
	I0318 10:59:24.288463   14240 api_server.go:88] waiting for apiserver healthz status ...
	I0318 10:59:24.288602   14240 api_server.go:253] Checking apiserver healthz at https://172.25.151.65:8441/healthz ...
	I0318 10:59:27.256580   14240 api_server.go:279] https://172.25.151.65:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 10:59:27.256580   14240 api_server.go:103] status: https://172.25.151.65:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 10:59:27.256580   14240 api_server.go:253] Checking apiserver healthz at https://172.25.151.65:8441/healthz ...
	I0318 10:59:27.346793   14240 api_server.go:279] https://172.25.151.65:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 10:59:27.347183   14240 api_server.go:103] status: https://172.25.151.65:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 10:59:27.347183   14240 api_server.go:253] Checking apiserver healthz at https://172.25.151.65:8441/healthz ...
	I0318 10:59:27.374780   14240 api_server.go:279] https://172.25.151.65:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 10:59:27.374780   14240 api_server.go:103] status: https://172.25.151.65:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 10:59:27.791585   14240 api_server.go:253] Checking apiserver healthz at https://172.25.151.65:8441/healthz ...
	I0318 10:59:27.800803   14240 api_server.go:279] https://172.25.151.65:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 10:59:27.800803   14240 api_server.go:103] status: https://172.25.151.65:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 10:59:28.295482   14240 api_server.go:253] Checking apiserver healthz at https://172.25.151.65:8441/healthz ...
	I0318 10:59:28.311619   14240 api_server.go:279] https://172.25.151.65:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 10:59:28.311619   14240 api_server.go:103] status: https://172.25.151.65:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 10:59:28.801142   14240 api_server.go:253] Checking apiserver healthz at https://172.25.151.65:8441/healthz ...
	I0318 10:59:28.809250   14240 api_server.go:279] https://172.25.151.65:8441/healthz returned 200:
	ok
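
The healthz sequence above is typical of an apiserver restart: early 403s because the anonymous probe is rejected until the RBAC bootstrap roles exist, then 500s while post-start hooks ([-]poststarthook/rbac/bootstrap-roles and friends) finish, then 200. A sketch of such a poll; TLS verification is skipped because the probe runs without a client certificate, illustration only:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify because no client cert or CA is configured at
	// this point; not a production setting.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://172.25.151.65:8441/healthz")
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", status) // 403/500 => keep waiting
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for healthz")
}
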
	I0318 10:59:28.809799   14240 round_trippers.go:463] GET https://172.25.151.65:8441/version
	I0318 10:59:28.809799   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:28.809799   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:28.809799   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:28.825899   14240 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0318 10:59:28.825899   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:28.825899   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:28.825899   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:28.825899   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:28.825899   14240 round_trippers.go:580]     Content-Length: 264
	I0318 10:59:28.826343   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:28 GMT
	I0318 10:59:28.826343   14240 round_trippers.go:580]     Audit-Id: 7f200fd9-31de-4c6a-86b5-d1d086c2eadb
	I0318 10:59:28.826343   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:28.826589   14240 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 10:59:28.826755   14240 api_server.go:141] control plane version: v1.28.4
	I0318 10:59:28.826755   14240 api_server.go:131] duration metric: took 4.5382638s to wait for apiserver health ...
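
The /version round-trip above is how the control plane version is read back. Decoding that response body into the fields reported here ("control plane version: v1.28.4"), as a sketch:

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the JSON shape of the /version response shown above.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	body := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(body, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
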
	I0318 10:59:28.826755   14240 cni.go:84] Creating CNI manager for ""
	I0318 10:59:28.826841   14240 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 10:59:28.829337   14240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 10:59:28.845237   14240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 10:59:28.873251   14240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
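
The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config announced at out.go:177. The log does not show the file's contents; the sketch below emits a plausible minimal bridge conflist for the 10.244.0.0/16 pod subnet from the kubeadm config above (assumed shape, not minikube's verbatim template):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed minimal bridge conflist; field values other than the subnet
	// are illustrative defaults.
	conf := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{{
			"type":      "bridge",
			"bridge":    "bridge",
			"isGateway": true,
			"ipMasq":    true,
			"ipam": map[string]any{
				"type":   "host-local",
				"subnet": "10.244.0.0/16",
			},
		}},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}
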
	I0318 10:59:28.921723   14240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 10:59:28.921975   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods
	I0318 10:59:28.922040   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:28.922040   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:28.922040   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:28.930430   14240 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 10:59:28.930430   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:28.930430   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:28.930430   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:28.930430   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:28.930430   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:28.930517   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:28 GMT
	I0318 10:59:28.930517   14240 round_trippers.go:580]     Audit-Id: 6d3de002-38e4-4e6f-8e4b-8797c24be607
	I0318 10:59:28.931854   14240 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"592"},"items":[{"metadata":{"name":"coredns-5dd5756b68-k65n6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eef8e07c-ce5d-4ef0-a8ec-2266dd920be2","resourceVersion":"586","creationTimestamp":"2024-03-18T10:57:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17e1b190-888e-4a50-a300-2aa6dc04ffee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:57:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17e1b190-888e-4a50-a300-2aa6dc04ffee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49455 chars]
	I0318 10:59:28.937402   14240 system_pods.go:59] 7 kube-system pods found
	I0318 10:59:28.937445   14240 system_pods.go:61] "coredns-5dd5756b68-k65n6" [eef8e07c-ce5d-4ef0-a8ec-2266dd920be2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 10:59:28.937501   14240 system_pods.go:61] "etcd-functional-499500" [6cf47567-9439-4c90-9a6f-0c703184b674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 10:59:28.937501   14240 system_pods.go:61] "kube-apiserver-functional-499500" [0c41fd26-6547-4cf0-ac99-344dcef1194a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 10:59:28.937501   14240 system_pods.go:61] "kube-controller-manager-functional-499500" [0d268f61-ae12-4f7c-835d-46539c8014f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 10:59:28.937501   14240 system_pods.go:61] "kube-proxy-rm8c5" [ed875552-beb2-4ba2-9347-76450b017fa2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 10:59:28.937501   14240 system_pods.go:61] "kube-scheduler-functional-499500" [753b8bd0-5da0-4ffd-9b9f-186f575f8392] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 10:59:28.937501   14240 system_pods.go:61] "storage-provisioner" [8c751c63-0f7c-44a2-a6ae-f56aca9513fd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 10:59:28.937501   14240 system_pods.go:74] duration metric: took 15.7129ms to wait for pod list to return data ...
	I0318 10:59:28.937501   14240 node_conditions.go:102] verifying NodePressure condition ...
	I0318 10:59:28.937501   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes
	I0318 10:59:28.937501   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:28.937501   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:28.937501   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:28.941116   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:28.941116   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:28.941116   14240 round_trippers.go:580]     Audit-Id: d22ecc61-ae3d-4b61-8dc6-65b2e429c172
	I0318 10:59:28.941116   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:28.941116   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:28.941116   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:28.942142   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:28.942142   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:28 GMT
	I0318 10:59:28.942142   14240 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"592"},"items":[{"metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4839 chars]
	I0318 10:59:28.943201   14240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 10:59:28.943201   14240 node_conditions.go:123] node cpu capacity is 2
	I0318 10:59:28.943201   14240 node_conditions.go:105] duration metric: took 5.6995ms to run NodePressure ...
	I0318 10:59:28.943201   14240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 10:59:29.617667   14240 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0318 10:59:29.617667   14240 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0318 10:59:29.617667   14240 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 10:59:29.617667   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0318 10:59:29.618703   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:29.618703   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:29.618703   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:29.629670   14240 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 10:59:29.629670   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:29.629670   14240 round_trippers.go:580]     Audit-Id: c8157c44-e536-4da8-a39b-53787c25e863
	I0318 10:59:29.629670   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:29.629670   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:29.629670   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:29.629670   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:29.629670   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:29 GMT
	I0318 10:59:29.629670   14240 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"598"},"items":[{"metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29775 chars]
	I0318 10:59:29.629670   14240 kubeadm.go:733] kubelet initialised
	I0318 10:59:29.629670   14240 kubeadm.go:734] duration metric: took 12.003ms waiting for restarted kubelet to initialise ...
	I0318 10:59:29.629670   14240 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 10:59:29.629670   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods
	I0318 10:59:29.629670   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:29.629670   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:29.629670   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:29.674707   14240 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0318 10:59:29.675500   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:29.675500   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:29.675500   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:29 GMT
	I0318 10:59:29.675500   14240 round_trippers.go:580]     Audit-Id: bc8c3758-4647-4b0c-a676-4a86b73077ef
	I0318 10:59:29.675500   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:29.675500   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:29.675500   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:29.676649   14240 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"598"},"items":[{"metadata":{"name":"coredns-5dd5756b68-k65n6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eef8e07c-ce5d-4ef0-a8ec-2266dd920be2","resourceVersion":"586","creationTimestamp":"2024-03-18T10:57:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17e1b190-888e-4a50-a300-2aa6dc04ffee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:57:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17e1b190-888e-4a50-a300-2aa6dc04ffee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49455 chars]
	I0318 10:59:29.679813   14240 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-k65n6" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:29.679833   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-k65n6
	I0318 10:59:29.679977   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:29.679977   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:29.679977   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:29.686831   14240 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 10:59:29.686831   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:29.686831   14240 round_trippers.go:580]     Audit-Id: ff3cccd2-7652-4fa5-889a-6ca264dc9cd0
	I0318 10:59:29.686831   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:29.686831   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:29.686831   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:29.686831   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:29.686831   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:29 GMT
	I0318 10:59:29.694749   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-k65n6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eef8e07c-ce5d-4ef0-a8ec-2266dd920be2","resourceVersion":"586","creationTimestamp":"2024-03-18T10:57:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17e1b190-888e-4a50-a300-2aa6dc04ffee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:57:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17e1b190-888e-4a50-a300-2aa6dc04ffee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6207 chars]
	I0318 10:59:29.695393   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:29.695451   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:29.695451   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:29.695451   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:29.705823   14240 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 10:59:29.706567   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:29.706567   14240 round_trippers.go:580]     Audit-Id: 58b2ff69-8b56-4498-b63b-ded6baddeb6a
	I0318 10:59:29.706644   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:29.706644   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:29.706644   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:29.706644   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:29.706644   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:29 GMT
	I0318 10:59:29.706965   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:30.195911   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-k65n6
	I0318 10:59:30.196006   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:30.196006   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:30.196006   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:30.202512   14240 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 10:59:30.202633   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:30.202633   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:30 GMT
	I0318 10:59:30.202633   14240 round_trippers.go:580]     Audit-Id: 8fa58b6d-2586-40f4-8171-4694ee0c8b9a
	I0318 10:59:30.202633   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:30.202633   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:30.202633   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:30.202633   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:30.202853   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-k65n6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eef8e07c-ce5d-4ef0-a8ec-2266dd920be2","resourceVersion":"586","creationTimestamp":"2024-03-18T10:57:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17e1b190-888e-4a50-a300-2aa6dc04ffee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:57:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17e1b190-888e-4a50-a300-2aa6dc04ffee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6207 chars]
	I0318 10:59:30.203642   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:30.203699   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:30.203699   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:30.203699   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:30.208516   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:30.208516   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:30.208516   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:30.208516   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:30 GMT
	I0318 10:59:30.208516   14240 round_trippers.go:580]     Audit-Id: 20a94d9c-d265-4be4-93af-83cebc97d55c
	I0318 10:59:30.208516   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:30.208516   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:30.208516   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:30.208516   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:30.683231   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-k65n6
	I0318 10:59:30.683313   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:30.683313   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:30.683313   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:30.687473   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:30.688360   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:30.688360   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:30.688360   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:30 GMT
	I0318 10:59:30.688360   14240 round_trippers.go:580]     Audit-Id: 4e29bde9-6aef-484e-823d-615778b13789
	I0318 10:59:30.688360   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:30.688360   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:30.688360   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:30.689019   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-k65n6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eef8e07c-ce5d-4ef0-a8ec-2266dd920be2","resourceVersion":"586","creationTimestamp":"2024-03-18T10:57:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17e1b190-888e-4a50-a300-2aa6dc04ffee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:57:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17e1b190-888e-4a50-a300-2aa6dc04ffee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6207 chars]
	I0318 10:59:30.689737   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:30.689737   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:30.689737   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:30.689737   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:30.691962   14240 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 10:59:30.691962   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:30.691962   14240 round_trippers.go:580]     Audit-Id: 20a99b88-2882-4723-84a2-a1faf1a4c099
	I0318 10:59:30.691962   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:30.691962   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:30.692953   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:30.692953   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:30.692953   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:30 GMT
	I0318 10:59:30.693341   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:31.183802   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-k65n6
	I0318 10:59:31.183885   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:31.183885   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:31.183885   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:31.188827   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:31.188827   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:31.188827   14240 round_trippers.go:580]     Audit-Id: fd041d52-2978-4ae1-89d0-c89b9d464d29
	I0318 10:59:31.188827   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:31.188827   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:31.188827   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:31.188827   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:31.188976   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:31 GMT
	I0318 10:59:31.189108   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-k65n6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eef8e07c-ce5d-4ef0-a8ec-2266dd920be2","resourceVersion":"609","creationTimestamp":"2024-03-18T10:57:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17e1b190-888e-4a50-a300-2aa6dc04ffee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:57:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17e1b190-888e-4a50-a300-2aa6dc04ffee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6155 chars]
	I0318 10:59:31.189895   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:31.189943   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:31.189943   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:31.189943   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:31.193340   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:31.193340   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:31.193340   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:31.193340   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:31.193340   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:31.193340   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:31 GMT
	I0318 10:59:31.193340   14240 round_trippers.go:580]     Audit-Id: 46e78303-e8d1-4a6b-a7d3-11600e86490c
	I0318 10:59:31.193619   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:31.194003   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:31.194207   14240 pod_ready.go:92] pod "coredns-5dd5756b68-k65n6" in "kube-system" namespace has status "Ready":"True"
	I0318 10:59:31.194207   14240 pod_ready.go:81] duration metric: took 1.514365s for pod "coredns-5dd5756b68-k65n6" in "kube-system" namespace to be "Ready" ...
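The flip is visible in the response bodies: the coredns pod's resourceVersion advances from 586 to 609 on the iteration where pod_ready.go:92 reports Ready True, 1.514365s after the wait began. Polling is the simple, bounded choice here; for contrast, a watch would deliver that same transition as a single event instead of repeated GETs. A hedged sketch under the same assumed clientset (illustrative only; the log above clearly polls):

    // Hypothetical watch-based alternative: subscribe to the single pod and
    // return when an event shows its Ready condition True.
    package readiness

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func watchPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        ctx, cancel := context.WithTimeout(ctx, 4*time.Minute)
        defer cancel()
        w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
            FieldSelector: "metadata.name=" + name,
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            pod, ok := ev.Object.(*corev1.Pod)
            if !ok {
                continue
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
        }
        return fmt.Errorf("watch closed before pod %q became Ready", name)
    }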
	I0318 10:59:31.194207   14240 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:31.194207   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:31.194207   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:31.194207   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:31.194207   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:31.199298   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:31.199298   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:31.199298   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:31.199298   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:31 GMT
	I0318 10:59:31.199298   14240 round_trippers.go:580]     Audit-Id: 927777e6-8cb9-4648-8f4b-a4f8fbc92c49
	I0318 10:59:31.199298   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:31.199298   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:31.199298   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:31.199298   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:31.200210   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:31.200210   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:31.200210   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:31.200210   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:31.203180   14240 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 10:59:31.203504   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:31.203504   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:31 GMT
	I0318 10:59:31.203504   14240 round_trippers.go:580]     Audit-Id: e0dad48e-f816-475e-9917-34cea86aa74e
	I0318 10:59:31.203504   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:31.203504   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:31.203504   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:31.203504   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:31.203672   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:31.695675   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:31.695860   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:31.695860   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:31.695860   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:31.700113   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:31.700113   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:31.700904   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:31.700904   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:31.700904   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:31 GMT
	I0318 10:59:31.700904   14240 round_trippers.go:580]     Audit-Id: 67b5ac94-850c-4bd2-8d9f-3d9c31a984e4
	I0318 10:59:31.700904   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:31.700904   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:31.701153   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:31.701703   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:31.701703   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:31.701703   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:31.701703   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:31.705339   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:31.705460   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:31.705460   14240 round_trippers.go:580]     Audit-Id: 25ca8da6-fbe5-49c0-9b0c-4e124996fc70
	I0318 10:59:31.705460   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:31.705568   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:31.705568   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:31.705568   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:31.705568   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:31 GMT
	I0318 10:59:31.705748   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:32.194404   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:32.194404   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:32.194404   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:32.194404   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:32.199610   14240 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 10:59:32.199610   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:32.199610   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:32.199610   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:32.199610   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:32.199610   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:32.199610   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:32 GMT
	I0318 10:59:32.199610   14240 round_trippers.go:580]     Audit-Id: cf32d6ae-7312-4903-88bf-4d7c3d1045ef
	I0318 10:59:32.200093   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:32.200989   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:32.200989   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:32.201119   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:32.201136   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:32.203425   14240 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 10:59:32.203425   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:32.203425   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:32.203425   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:32.204242   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:32 GMT
	I0318 10:59:32.204242   14240 round_trippers.go:580]     Audit-Id: b73fbed2-a690-46ec-8c18-85b0a9ea1b4b
	I0318 10:59:32.204242   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:32.204242   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:32.204892   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:32.697218   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:32.697274   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:32.697274   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:32.697330   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:32.701622   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:32.702470   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:32.702470   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:32.702470   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:32.702470   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:32.702470   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:32.702470   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:32 GMT
	I0318 10:59:32.702470   14240 round_trippers.go:580]     Audit-Id: 1fddd2f7-ae82-4490-9d55-fd1c0ae342df
	I0318 10:59:32.703005   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:32.703929   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:32.703999   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:32.703999   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:32.703999   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:32.707474   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:32.708101   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:32.708101   14240 round_trippers.go:580]     Audit-Id: 9193247d-999d-44a1-a2d7-87f469161db4
	I0318 10:59:32.708101   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:32.708101   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:32.708101   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:32.708101   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:32.708101   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:32 GMT
	I0318 10:59:32.708441   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:33.210033   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:33.210192   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:33.210192   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:33.210192   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:33.215201   14240 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 10:59:33.215201   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:33.215343   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:33.215343   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:33.215343   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:33.215343   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:33.215343   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:33 GMT
	I0318 10:59:33.215343   14240 round_trippers.go:580]     Audit-Id: bab3ad06-1aed-493b-b8a4-f09c75aff572
	I0318 10:59:33.215640   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:33.215761   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:33.215761   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:33.215761   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:33.215761   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:33.220577   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:33.221262   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:33.221262   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:33 GMT
	I0318 10:59:33.221262   14240 round_trippers.go:580]     Audit-Id: a93e3790-e9b9-4e1e-966e-6e01602645cc
	I0318 10:59:33.221262   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:33.221262   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:33.221262   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:33.221262   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:33.221924   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:33.221954   14240 pod_ready.go:102] pod "etcd-functional-499500" in "kube-system" namespace has status "Ready":"False"
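pod_ready.go:102 is the not-ready counterpart of :92: it prints the pod's Ready condition status verbatim, and here etcd-functional-499500 is still at resourceVersion 581 with Ready False, so the loop keeps iterating. A tiny hypothetical helper showing where that "True"/"False" string comes from:

    // Hypothetical helper mirroring the pod_ready.go status lines above: the
    // printed value is the pod's PodReady condition status.
    package readiness

    import corev1 "k8s.io/api/core/v1"

    func readyStatus(pod *corev1.Pod) string {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return string(c.Status) // "True", "False", or "Unknown"
            }
        }
        return string(corev1.ConditionUnknown)
    }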
	I0318 10:59:33.695000   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:33.695148   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:33.695148   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:33.695148   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:33.699554   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:33.699554   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:33.699554   14240 round_trippers.go:580]     Audit-Id: 197e63b3-b71b-4bce-8dd2-81fd58f5612d
	I0318 10:59:33.699990   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:33.699990   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:33.699990   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:33.699990   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:33.699990   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:33 GMT
	I0318 10:59:33.700103   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:33.700460   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:33.700460   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:33.700460   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:33.700460   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:33.704030   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:33.704330   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:33.704330   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:33.704330   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:33.704330   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:33.704330   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:33.704330   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:33 GMT
	I0318 10:59:33.704330   14240 round_trippers.go:580]     Audit-Id: f95638b3-6a37-4f01-9888-0b4437284503
	I0318 10:59:33.705043   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:34.196351   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:34.196422   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:34.196422   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:34.196422   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:34.202975   14240 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 10:59:34.202975   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:34.202975   14240 round_trippers.go:580]     Audit-Id: 6465393e-d57e-43cf-b9ee-359d13b90cf7
	I0318 10:59:34.202975   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:34.202975   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:34.202975   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:34.202975   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:34.202975   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:34 GMT
	I0318 10:59:34.203558   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:34.203769   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:34.203769   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:34.203769   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:34.203769   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:34.207601   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:34.207601   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:34.207601   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:34.207601   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:34.207601   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:34.207601   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:34 GMT
	I0318 10:59:34.207601   14240 round_trippers.go:580]     Audit-Id: eb2aa80b-db38-431e-ab9c-5ba1cefc8e7b
	I0318 10:59:34.207601   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:34.208275   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:34.708916   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:34.708916   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:34.708916   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:34.708916   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:34.713569   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:34.713569   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:34.714055   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:34.714055   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:34.714055   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:34.714055   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:34 GMT
	I0318 10:59:34.714055   14240 round_trippers.go:580]     Audit-Id: bd6805c7-9fe9-4a2d-83d8-47213fa86a41
	I0318 10:59:34.714055   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:34.714287   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:34.715029   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:34.715029   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:34.715029   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:34.715108   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:34.719053   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:34.719053   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:34.719053   14240 round_trippers.go:580]     Audit-Id: 3bfc8ee9-ba9e-4448-8cac-a44b73ef721a
	I0318 10:59:34.719053   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:34.719053   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:34.719053   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:34.719053   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:34.719053   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:34 GMT
	I0318 10:59:34.719053   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:35.207780   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:35.207780   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:35.207780   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:35.207780   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:35.212704   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:35.212704   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:35.212704   14240 round_trippers.go:580]     Audit-Id: 2f60f5cd-f81b-4600-9767-431b4def3ebf
	I0318 10:59:35.212704   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:35.212704   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:35.212704   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:35.212704   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:35.212704   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:35 GMT
	I0318 10:59:35.212877   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:35.213663   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:35.213736   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:35.213736   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:35.213736   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:35.217219   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:35.217300   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:35.217300   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:35 GMT
	I0318 10:59:35.217300   14240 round_trippers.go:580]     Audit-Id: 8c40872b-6ad9-40e2-8a6d-023938c06ff0
	I0318 10:59:35.217300   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:35.217300   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:35.217300   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:35.217411   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:35.217714   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:35.706483   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:35.706483   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:35.706580   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:35.706580   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:35.710622   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:35.710622   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:35.710622   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:35.710622   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:35.711174   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:35.711174   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:35.711174   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:35 GMT
	I0318 10:59:35.711174   14240 round_trippers.go:580]     Audit-Id: ecf53b69-fa9c-40dc-b1cc-d319eae66a70
	I0318 10:59:35.712058   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:35.712819   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:35.712819   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:35.712819   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:35.712819   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:35.715480   14240 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 10:59:35.715480   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:35.715480   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:35.715480   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:35.715480   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:35.715480   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:35.715480   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:35 GMT
	I0318 10:59:35.715480   14240 round_trippers.go:580]     Audit-Id: a58925d4-ae5b-472d-9a23-78718106d7d8
	I0318 10:59:35.716564   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:35.717099   14240 pod_ready.go:102] pod "etcd-functional-499500" in "kube-system" namespace has status "Ready":"False"
	I0318 10:59:36.205731   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:36.205792   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:36.205792   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:36.205792   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:36.210328   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:36.210328   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:36.210328   14240 round_trippers.go:580]     Audit-Id: b53808b6-611d-4e16-ada3-d67b4d179d89
	I0318 10:59:36.210328   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:36.210328   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:36.210328   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:36.210328   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:36.210328   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:36 GMT
	I0318 10:59:36.210591   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:36.210824   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:36.211366   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:36.211366   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:36.211366   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:36.214631   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:36.215626   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:36.215626   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:36 GMT
	I0318 10:59:36.215626   14240 round_trippers.go:580]     Audit-Id: 1fab8618-8028-44b6-bafa-e147921b0814
	I0318 10:59:36.215626   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:36.215626   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:36.215626   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:36.215681   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:36.215681   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:36.703554   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:36.703554   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:36.703554   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:36.703554   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:36.708846   14240 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 10:59:36.708846   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:36.708950   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:36.709015   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:36.709015   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:36.709015   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:36.709015   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:36 GMT
	I0318 10:59:36.709015   14240 round_trippers.go:580]     Audit-Id: 4f5ddf90-4b09-4d8f-9ad5-645c733483d2
	I0318 10:59:36.709289   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:36.710078   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:36.710157   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:36.710157   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:36.710157   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:36.713069   14240 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 10:59:36.713069   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:36.713436   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:36 GMT
	I0318 10:59:36.713436   14240 round_trippers.go:580]     Audit-Id: 432bebd1-a526-4758-bf66-d68e0cc49da1
	I0318 10:59:36.713436   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:36.713436   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:36.713436   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:36.713436   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:36.713806   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:37.201889   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:37.201889   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:37.201889   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:37.201889   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:37.207076   14240 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 10:59:37.207357   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:37.207357   14240 round_trippers.go:580]     Audit-Id: 653cdd3b-0c24-4d0f-81ea-a508695448d6
	I0318 10:59:37.207357   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:37.207357   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:37.207357   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:37.207357   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:37.207428   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:37 GMT
	I0318 10:59:37.207626   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:37.208535   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:37.208535   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:37.208535   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:37.208535   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:37.213414   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:37.213414   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:37.213414   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:37 GMT
	I0318 10:59:37.213414   14240 round_trippers.go:580]     Audit-Id: 06c53318-da95-4bd5-ac30-7d8cb359749f
	I0318 10:59:37.213414   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:37.213414   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:37.213414   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:37.213414   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:37.215788   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:37.703653   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:37.703653   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:37.703653   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:37.703653   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:37.707248   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:37.707248   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:37.707248   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:37 GMT
	I0318 10:59:37.707248   14240 round_trippers.go:580]     Audit-Id: db30e41e-bfa7-4264-9d30-33b54a21e69e
	I0318 10:59:37.707248   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:37.707248   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:37.707248   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:37.707248   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:37.707248   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:37.709862   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:37.709919   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:37.709919   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:37.709919   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:37.712789   14240 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 10:59:37.713132   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:37.713132   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:37.713132   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:37.713132   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:37.713132   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:37.713132   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:37 GMT
	I0318 10:59:37.713180   14240 round_trippers.go:580]     Audit-Id: 05ae9aed-1791-4b2e-be02-ecabac4132c8
	I0318 10:59:37.713230   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:38.202089   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:38.202089   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:38.202089   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:38.202089   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:38.207662   14240 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 10:59:38.207662   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:38.207662   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:38.207662   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:38.207662   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:38.207662   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:38.207662   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:38 GMT
	I0318 10:59:38.207662   14240 round_trippers.go:580]     Audit-Id: 376bf7c3-6e5c-4700-b980-0de16f0f6ed4
	I0318 10:59:38.207662   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:38.208660   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:38.208660   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:38.208660   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:38.208660   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:38.212518   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:38.212518   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:38.212518   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:38.212518   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:38.212518   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:38.212518   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:38 GMT
	I0318 10:59:38.212518   14240 round_trippers.go:580]     Audit-Id: ff1fb3c6-40db-44ea-be20-31aae7a44205
	I0318 10:59:38.212518   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:38.212518   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:38.213383   14240 pod_ready.go:102] pod "etcd-functional-499500" in "kube-system" namespace has status "Ready":"False"
	I0318 10:59:38.701044   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:38.701139   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:38.701139   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:38.701139   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:38.706512   14240 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 10:59:38.706512   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:38.706512   14240 round_trippers.go:580]     Audit-Id: a02aee5f-83e2-473e-8166-df1ade1b2942
	I0318 10:59:38.706512   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:38.706512   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:38.706512   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:38.706512   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:38.706512   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:38 GMT
	I0318 10:59:38.706512   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:38.707634   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:38.707634   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:38.707634   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:38.707634   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:38.711262   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:38.711262   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:38.711262   14240 round_trippers.go:580]     Audit-Id: 024ed372-ccb5-4fd9-8b95-19a81be0feca
	I0318 10:59:38.711262   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:38.711262   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:38.711262   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:38.711986   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:38.711986   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:38 GMT
	I0318 10:59:38.712252   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:39.200302   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:39.200302   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:39.200302   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:39.201476   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:39.207740   14240 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 10:59:39.208669   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:39.208669   14240 round_trippers.go:580]     Audit-Id: eaffd096-6bba-46ee-a989-765969c2f21b
	I0318 10:59:39.208708   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:39.208708   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:39.208708   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:39.208708   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:39.208778   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:39 GMT
	I0318 10:59:39.208930   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:39.209682   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:39.209682   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:39.209682   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:39.209682   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:39.214014   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:39.214014   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:39.214014   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:39.214014   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:39.214014   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:39.214014   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:39 GMT
	I0318 10:59:39.214014   14240 round_trippers.go:580]     Audit-Id: 4205c9b5-a17a-4665-a191-834f6b72a1be
	I0318 10:59:39.214014   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:39.214014   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:39.697370   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:39.697370   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:39.697370   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:39.697370   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:39.702801   14240 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 10:59:39.702801   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:39.703141   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:39.703141   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:39.703141   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:39.703141   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:39.703141   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:39 GMT
	I0318 10:59:39.703141   14240 round_trippers.go:580]     Audit-Id: a5e241b8-22dc-4b34-a622-05aad0ea4d20
	I0318 10:59:39.703367   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:39.703709   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:39.703709   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:39.703709   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:39.703709   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:39.707347   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:39.708335   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:39.708335   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:39.708335   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:39.708335   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:39 GMT
	I0318 10:59:39.708335   14240 round_trippers.go:580]     Audit-Id: 22a004e3-42de-47b6-8d80-ce1dfaa635ec
	I0318 10:59:39.708335   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:39.708335   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:39.708725   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:40.199507   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:40.199563   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:40.199563   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:40.199563   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:40.205030   14240 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 10:59:40.205030   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:40.205030   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:40.205273   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:40.205273   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:40.205273   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:40 GMT
	I0318 10:59:40.205273   14240 round_trippers.go:580]     Audit-Id: 22829625-c856-428f-a545-7ff88310a437
	I0318 10:59:40.205273   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:40.205619   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"581","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0318 10:59:40.206747   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:40.206747   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:40.206802   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:40.206866   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:40.209658   14240 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 10:59:40.209658   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:40.209658   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:40.209658   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:40 GMT
	I0318 10:59:40.209658   14240 round_trippers.go:580]     Audit-Id: d4c0d570-2baa-4a17-a3b5-82720553ba2b
	I0318 10:59:40.209658   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:40.209658   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:40.209658   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:40.210656   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:40.704157   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:40.704157   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:40.704157   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:40.704157   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:40.707740   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:40.707740   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:40.708230   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:40.708230   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:40.708230   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:40.708230   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:40 GMT
	I0318 10:59:40.708230   14240 round_trippers.go:580]     Audit-Id: fc08baf4-8660-44cd-99bc-e26f0f6807e4
	I0318 10:59:40.708230   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:40.708441   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"617","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6076 chars]
	I0318 10:59:40.708704   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:40.709250   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:40.709250   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:40.709250   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:40.713200   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:40.713200   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:40.713200   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:40.713200   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:40.713200   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:40.713200   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:40.713676   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:40 GMT
	I0318 10:59:40.713676   14240 round_trippers.go:580]     Audit-Id: 6d0358d4-a988-423c-8804-03e02c08269f
	I0318 10:59:40.714031   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:40.714520   14240 pod_ready.go:92] pod "etcd-functional-499500" in "kube-system" namespace has status "Ready":"True"
	I0318 10:59:40.714594   14240 pod_ready.go:81] duration metric: took 9.5203265s for pod "etcd-functional-499500" in "kube-system" namespace to be "Ready" ...
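[editor's note] The 9.5s of polling above is minikube's pod_ready wait loop: a GET on the pod roughly every 500ms until its PodReady condition reports True, bounded by the 4m0s budget announced when the wait starts. The following is a minimal client-go sketch of the same check, not minikube's actual implementation; the pod name, namespace, interval, and timeout are taken from the log, while the kubeconfig location and error handling are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll about every 500ms, as the log timestamps suggest, until the
	// PodReady condition is True or the 4m budget is exhausted.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-functional-499500", metav1.GetOptions{})
			if err != nil {
				return false, err // stop on API errors; retrying instead is a design choice
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not reported yet; keep polling
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}

The equivalent one-off check from a shell is: kubectl -n kube-system wait --for=condition=Ready pod/etcd-functional-499500 --timeout=4m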
	I0318 10:59:40.714594   14240 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:40.714678   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-499500
	I0318 10:59:40.714748   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:40.714748   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:40.714748   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:40.717948   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:40.717948   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:40.717948   14240 round_trippers.go:580]     Audit-Id: d7ad8883-78a5-41bf-acfd-627d61bb27b7
	I0318 10:59:40.718305   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:40.718305   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:40.718305   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:40.718305   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:40.718305   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:40 GMT
	I0318 10:59:40.718681   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-499500","namespace":"kube-system","uid":"0c41fd26-6547-4cf0-ac99-344dcef1194a","resourceVersion":"582","creationTimestamp":"2024-03-18T10:56:46Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.151.65:8441","kubernetes.io/config.hash":"8b83d512506d3345291ef83d5b9e76e6","kubernetes.io/config.mirror":"8b83d512506d3345291ef83d5b9e76e6","kubernetes.io/config.seen":"2024-03-18T10:56:38.158338711Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7853 chars]
	I0318 10:59:40.719524   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:40.719524   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:40.719524   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:40.719524   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:40.722115   14240 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 10:59:40.722115   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:40.722115   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:40.722115   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:40 GMT
	I0318 10:59:40.722115   14240 round_trippers.go:580]     Audit-Id: 94d377be-906a-4558-b990-fbee469cc823
	I0318 10:59:40.722115   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:40.722115   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:40.722115   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:40.723120   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:41.218591   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-499500
	I0318 10:59:41.218673   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:41.218673   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:41.218673   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:41.222919   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:41.222919   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:41.222919   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:41.222919   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:41.222919   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:41.222919   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:41.222919   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:41 GMT
	I0318 10:59:41.222919   14240 round_trippers.go:580]     Audit-Id: e6c0e829-9dbc-4fe5-9766-54178c780045
	I0318 10:59:41.223924   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-499500","namespace":"kube-system","uid":"0c41fd26-6547-4cf0-ac99-344dcef1194a","resourceVersion":"582","creationTimestamp":"2024-03-18T10:56:46Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.151.65:8441","kubernetes.io/config.hash":"8b83d512506d3345291ef83d5b9e76e6","kubernetes.io/config.mirror":"8b83d512506d3345291ef83d5b9e76e6","kubernetes.io/config.seen":"2024-03-18T10:56:38.158338711Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7853 chars]
	I0318 10:59:41.224686   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:41.224686   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:41.224686   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:41.224686   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:41.229151   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:41.229151   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:41.229151   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:41 GMT
	I0318 10:59:41.229151   14240 round_trippers.go:580]     Audit-Id: f5266990-ef8f-4a08-b4f1-2cfa4288b642
	I0318 10:59:41.229151   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:41.229151   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:41.229151   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:41.229151   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:41.229476   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:41.720719   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-499500
	I0318 10:59:41.721224   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:41.721224   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:41.721224   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:41.725685   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:41.725685   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:41.725685   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:41.725963   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:41.725963   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:41.725963   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:41 GMT
	I0318 10:59:41.725963   14240 round_trippers.go:580]     Audit-Id: 9d2bb0c9-ad13-4f2e-9bb6-a9b826c661e1
	I0318 10:59:41.725963   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:41.726395   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-499500","namespace":"kube-system","uid":"0c41fd26-6547-4cf0-ac99-344dcef1194a","resourceVersion":"623","creationTimestamp":"2024-03-18T10:56:46Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.151.65:8441","kubernetes.io/config.hash":"8b83d512506d3345291ef83d5b9e76e6","kubernetes.io/config.mirror":"8b83d512506d3345291ef83d5b9e76e6","kubernetes.io/config.seen":"2024-03-18T10:56:38.158338711Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7609 chars]
	I0318 10:59:41.727036   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:41.727036   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:41.727036   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:41.727036   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:41.730354   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:41.730354   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:41.730354   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:41.730354   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:41.730354   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:41.730354   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:41 GMT
	I0318 10:59:41.730354   14240 round_trippers.go:580]     Audit-Id: fe4792bb-a86d-4dda-a62d-5a79774d49d4
	I0318 10:59:41.730354   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:41.731669   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:41.732202   14240 pod_ready.go:92] pod "kube-apiserver-functional-499500" in "kube-system" namespace has status "Ready":"True"
	I0318 10:59:41.732202   14240 pod_ready.go:81] duration metric: took 1.017602s for pod "kube-apiserver-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:41.732259   14240 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:41.732361   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-499500
	I0318 10:59:41.732361   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:41.732411   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:41.732411   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:41.735749   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:41.735817   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:41.735817   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:41 GMT
	I0318 10:59:41.735817   14240 round_trippers.go:580]     Audit-Id: 900aaaba-8ce7-42f1-bf2e-cec9a73fc955
	I0318 10:59:41.735817   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:41.735817   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:41.735817   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:41.735817   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:41.736165   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-499500","namespace":"kube-system","uid":"0d268f61-ae12-4f7c-835d-46539c8014f1","resourceVersion":"615","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7fbea5131bbed18cc0f38cdbd279a602","kubernetes.io/config.mirror":"7fbea5131bbed18cc0f38cdbd279a602","kubernetes.io/config.seen":"2024-03-18T10:56:47.998688110Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7177 chars]
	I0318 10:59:41.736763   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:41.736763   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:41.736763   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:41.736763   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:41.741225   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:41.741225   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:41.741225   14240 round_trippers.go:580]     Audit-Id: cd39ed07-db83-4ac0-a9db-f98f6c11d301
	I0318 10:59:41.741225   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:41.741225   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:41.741225   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:41.741225   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:41.741225   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:41 GMT
	I0318 10:59:41.741225   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:41.741225   14240 pod_ready.go:92] pod "kube-controller-manager-functional-499500" in "kube-system" namespace has status "Ready":"True"
	I0318 10:59:41.741225   14240 pod_ready.go:81] duration metric: took 8.9663ms for pod "kube-controller-manager-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:41.741225   14240 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rm8c5" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:41.741225   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rm8c5
	I0318 10:59:41.741225   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:41.741225   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:41.741225   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:41.745206   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:41.745206   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:41.745206   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:41.745206   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:41.745206   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:41.745206   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:41 GMT
	I0318 10:59:41.745206   14240 round_trippers.go:580]     Audit-Id: c4e1539d-e553-4141-88c2-c69505cb2805
	I0318 10:59:41.745206   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:41.745615   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rm8c5","generateName":"kube-proxy-","namespace":"kube-system","uid":"ed875552-beb2-4ba2-9347-76450b017fa2","resourceVersion":"610","creationTimestamp":"2024-03-18T10:57:00Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"155407e9-6a0d-4f6c-84c3-caa04cfd81ee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:57:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"155407e9-6a0d-4f6c-84c3-caa04cfd81ee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5740 chars]
	I0318 10:59:41.746552   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:41.746552   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:41.746629   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:41.746629   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:41.749930   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:41.749930   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:41.749930   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:41 GMT
	I0318 10:59:41.750128   14240 round_trippers.go:580]     Audit-Id: 48785649-2928-4c5b-8eed-3b11c0c95a25
	I0318 10:59:41.750128   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:41.750128   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:41.750128   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:41.750128   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:41.750456   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:41.750957   14240 pod_ready.go:92] pod "kube-proxy-rm8c5" in "kube-system" namespace has status "Ready":"True"
	I0318 10:59:41.750957   14240 pod_ready.go:81] duration metric: took 9.7315ms for pod "kube-proxy-rm8c5" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:41.750957   14240 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:41.751026   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-499500
	I0318 10:59:41.751113   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:41.751175   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:41.751175   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:41.753299   14240 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 10:59:41.753299   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:41.753299   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:41.754042   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:41.754042   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:41.754042   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:41 GMT
	I0318 10:59:41.754042   14240 round_trippers.go:580]     Audit-Id: 7169e5eb-8145-440f-9425-2a99f73f3636
	I0318 10:59:41.754144   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:41.754375   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-499500","namespace":"kube-system","uid":"753b8bd0-5da0-4ffd-9b9f-186f575f8392","resourceVersion":"611","creationTimestamp":"2024-03-18T10:56:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fbeb8a40253ca29771ec7ba412e4b29f","kubernetes.io/config.mirror":"fbeb8a40253ca29771ec7ba412e4b29f","kubernetes.io/config.seen":"2024-03-18T10:56:38.158328411Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4907 chars]
	I0318 10:59:41.755289   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:41.755335   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:41.755335   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:41.755335   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:41.758086   14240 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 10:59:41.758086   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:41.758086   14240 round_trippers.go:580]     Audit-Id: b1fead4c-06f3-4217-962b-03d06d25c451
	I0318 10:59:41.758086   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:41.758086   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:41.758086   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:41.758086   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:41.758086   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:41 GMT
	I0318 10:59:41.758086   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:41.759390   14240 pod_ready.go:92] pod "kube-scheduler-functional-499500" in "kube-system" namespace has status "Ready":"True"
	I0318 10:59:41.759500   14240 pod_ready.go:81] duration metric: took 8.5437ms for pod "kube-scheduler-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:41.759500   14240 pod_ready.go:38] duration metric: took 12.1297536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
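
	[editor's note] The pod_ready lines above poll each control-plane pod until its PodReady condition reports True. A minimal Go sketch of that check, assuming a reachable kubeconfig at the client-go default path (not minikube's actual pod_ready.go implementation; the pod name is taken from this log):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True,
	// mirroring the "status \"Ready\":\"True\"" checks in the log.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Load ~/.kube/config; illustrative, minikube builds its config differently.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-apiserver-functional-499500", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
	}
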
	I0318 10:59:41.759500   14240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 10:59:41.780113   14240 command_runner.go:130] > -16
	I0318 10:59:41.780763   14240 ops.go:34] apiserver oom_adj: -16
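
	[editor's note] The -16 above comes from running cat /proc/$(pgrep kube-apiserver)/oom_adj on the node over SSH, confirming the apiserver is protected from the OOM killer. A rough local Go equivalent, assuming it runs inside the node and pgrep matches a single PID (minikube actually executes this through its ssh_runner):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the kube-apiserver PID; assumes exactly one match.
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		pid := strings.TrimSpace(string(out))
		// Read its OOM adjustment from /proc; -16 means "rarely kill".
		data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(data)))
	}
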
	I0318 10:59:41.780763   14240 kubeadm.go:591] duration metric: took 21.9481435s to restartPrimaryControlPlane
	I0318 10:59:41.780763   14240 kubeadm.go:393] duration metric: took 22.0167906s to StartCluster
	I0318 10:59:41.780903   14240 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:59:41.781000   14240 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 10:59:41.782529   14240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:59:41.783867   14240 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.151.65 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 10:59:41.783867   14240 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 10:59:41.787308   14240 addons.go:69] Setting storage-provisioner=true in profile "functional-499500"
	I0318 10:59:41.784569   14240 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 10:59:41.787308   14240 out.go:177] * Verifying Kubernetes components...
	I0318 10:59:41.787483   14240 addons.go:234] Setting addon storage-provisioner=true in "functional-499500"
	W0318 10:59:41.790957   14240 addons.go:243] addon storage-provisioner should already be in state true
	I0318 10:59:41.787483   14240 addons.go:69] Setting default-storageclass=true in profile "functional-499500"
	I0318 10:59:41.790987   14240 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-499500"
	I0318 10:59:41.790987   14240 host.go:66] Checking if "functional-499500" exists ...
	I0318 10:59:41.791616   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:59:41.791616   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
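
	[editor's note] The two libmachine lines above query the Hyper-V VM's power state by shelling out to PowerShell (once per addon goroutine, hence the duplicate). A sketch of that pattern in Go, using the exact command visible in the log; the helper itself is illustrative, not minikube's hyperv driver code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hypervVMState runs ( Hyper-V\Get-VM <name> ).state via PowerShell
	// and returns the trimmed result, e.g. "Running".
	func hypervVMState(name string) (string, error) {
		ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
		out, err := exec.Command(ps, "-NoProfile", "-NonInteractive",
			fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, name)).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := hypervVMState("functional-499500")
		if err != nil {
			panic(err)
		}
		fmt.Println("VM state:", state)
	}
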
	I0318 10:59:41.805591   14240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 10:59:42.116976   14240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 10:59:42.146026   14240 node_ready.go:35] waiting up to 6m0s for node "functional-499500" to be "Ready" ...
	I0318 10:59:42.146026   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:42.146026   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:42.146026   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:42.146026   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:42.151338   14240 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 10:59:42.151338   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:42.151338   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:42.151338   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:42.151338   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:42.151338   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:42.151338   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:42 GMT
	I0318 10:59:42.151338   14240 round_trippers.go:580]     Audit-Id: a76124aa-30f1-482b-b2cb-0f9347088e8f
	I0318 10:59:42.151338   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:42.152235   14240 node_ready.go:49] node "functional-499500" has status "Ready":"True"
	I0318 10:59:42.152294   14240 node_ready.go:38] duration metric: took 6.2095ms for node "functional-499500" to be "Ready" ...
	I0318 10:59:42.152294   14240 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 10:59:42.152631   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods
	I0318 10:59:42.152695   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:42.152695   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:42.152695   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:42.157976   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:42.157976   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:42.158024   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:42.158024   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:42.158024   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:42 GMT
	I0318 10:59:42.158024   14240 round_trippers.go:580]     Audit-Id: a8e8bcc4-3a0e-4741-89aa-0b905374ea4e
	I0318 10:59:42.158024   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:42.158024   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:42.159603   14240 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"623"},"items":[{"metadata":{"name":"coredns-5dd5756b68-k65n6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eef8e07c-ce5d-4ef0-a8ec-2266dd920be2","resourceVersion":"609","creationTimestamp":"2024-03-18T10:57:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17e1b190-888e-4a50-a300-2aa6dc04ffee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:57:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17e1b190-888e-4a50-a300-2aa6dc04ffee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 48425 chars]
	I0318 10:59:42.161987   14240 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k65n6" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:42.162196   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-k65n6
	I0318 10:59:42.162196   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:42.162229   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:42.162229   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:42.166489   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:42.166489   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:42.166489   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:42.166489   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:42.166489   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:42 GMT
	I0318 10:59:42.166489   14240 round_trippers.go:580]     Audit-Id: a0274624-fcb9-46af-beee-9fb7da5f0df9
	I0318 10:59:42.166489   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:42.166489   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:42.166489   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-k65n6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eef8e07c-ce5d-4ef0-a8ec-2266dd920be2","resourceVersion":"609","creationTimestamp":"2024-03-18T10:57:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17e1b190-888e-4a50-a300-2aa6dc04ffee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:57:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17e1b190-888e-4a50-a300-2aa6dc04ffee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6155 chars]
	I0318 10:59:42.306517   14240 request.go:629] Waited for 139.1851ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:42.306517   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:42.306517   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:42.306517   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:42.306517   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:42.325520   14240 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0318 10:59:42.325665   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:42.325665   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:42.325665   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:42.325775   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:42.325775   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:42.325775   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:42 GMT
	I0318 10:59:42.325775   14240 round_trippers.go:580]     Audit-Id: 4b2e717b-f102-4994-8d32-ec7e38474fad
	I0318 10:59:42.326054   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:42.326784   14240 pod_ready.go:92] pod "coredns-5dd5756b68-k65n6" in "kube-system" namespace has status "Ready":"True"
	I0318 10:59:42.326784   14240 pod_ready.go:81] duration metric: took 164.7293ms for pod "coredns-5dd5756b68-k65n6" in "kube-system" namespace to be "Ready" ...
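
	[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines appearing from here on are client-go's token-bucket rate limiter pacing requests before they ever reach the apiserver; the client-go defaults are QPS 5 with Burst 10, which this tight polling loop exceeds. A sketch of raising those limits on a rest.Config (values illustrative; minikube keeps the defaults here):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // steady-state requests per second (default 5)
		cfg.Burst = 100 // short-term burst allowance (default 10)
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("client constructed: %T\n", cs)
	}
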
	I0318 10:59:42.326784   14240 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:42.512443   14240 request.go:629] Waited for 185.6575ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:42.512813   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/etcd-functional-499500
	I0318 10:59:42.512813   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:42.512813   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:42.512813   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:42.521271   14240 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 10:59:42.521271   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:42.521502   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:42.521502   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:42.521502   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:42.521502   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:42.521502   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:42 GMT
	I0318 10:59:42.521564   14240 round_trippers.go:580]     Audit-Id: 6535cee5-0a97-45e7-af73-575d6dd249dd
	I0318 10:59:42.522999   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-499500","namespace":"kube-system","uid":"6cf47567-9439-4c90-9a6f-0c703184b674","resourceVersion":"617","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.65:2379","kubernetes.io/config.hash":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.mirror":"22a3c8cb2de99f2bf584b7dd6902ce82","kubernetes.io/config.seen":"2024-03-18T10:56:47.998681810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6076 chars]
	I0318 10:59:42.706092   14240 request.go:629] Waited for 182.0909ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:42.706408   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:42.706408   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:42.706408   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:42.706408   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:42.711750   14240 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 10:59:42.711775   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:42.711775   14240 round_trippers.go:580]     Audit-Id: e9c4da12-6652-48b0-bcd0-0b7668afe263
	I0318 10:59:42.711775   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:42.711775   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:42.711775   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:42.711775   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:42.711775   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:42 GMT
	I0318 10:59:42.712696   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:42.713542   14240 pod_ready.go:92] pod "etcd-functional-499500" in "kube-system" namespace has status "Ready":"True"
	I0318 10:59:42.713542   14240 pod_ready.go:81] duration metric: took 386.7548ms for pod "etcd-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:42.713665   14240 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:42.912321   14240 request.go:629] Waited for 198.3915ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-499500
	I0318 10:59:42.912527   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-499500
	I0318 10:59:42.912527   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:42.912527   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:42.912527   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:42.921161   14240 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 10:59:42.921161   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:42.921161   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:42.921161   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:42 GMT
	I0318 10:59:42.921161   14240 round_trippers.go:580]     Audit-Id: e6aadc9e-827d-4f96-a8b7-e697f4602422
	I0318 10:59:42.921161   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:42.921161   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:42.921161   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:42.921161   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-499500","namespace":"kube-system","uid":"0c41fd26-6547-4cf0-ac99-344dcef1194a","resourceVersion":"623","creationTimestamp":"2024-03-18T10:56:46Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.151.65:8441","kubernetes.io/config.hash":"8b83d512506d3345291ef83d5b9e76e6","kubernetes.io/config.mirror":"8b83d512506d3345291ef83d5b9e76e6","kubernetes.io/config.seen":"2024-03-18T10:56:38.158338711Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7609 chars]
	I0318 10:59:43.118985   14240 request.go:629] Waited for 196.6665ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:43.118985   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:43.118985   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:43.118985   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:43.118985   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:43.124061   14240 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 10:59:43.124061   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:43.124061   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:43.124061   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:43.124061   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:43 GMT
	I0318 10:59:43.124061   14240 round_trippers.go:580]     Audit-Id: 06d2720b-9eaa-4727-a741-961a50251619
	I0318 10:59:43.124061   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:43.124061   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:43.124061   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:43.124754   14240 pod_ready.go:92] pod "kube-apiserver-functional-499500" in "kube-system" namespace has status "Ready":"True"
	I0318 10:59:43.124817   14240 pod_ready.go:81] duration metric: took 411.0863ms for pod "kube-apiserver-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:43.124817   14240 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:43.310038   14240 request.go:629] Waited for 185.0381ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-499500
	I0318 10:59:43.310187   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-499500
	I0318 10:59:43.310245   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:43.310245   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:43.310245   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:43.314331   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:43.314331   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:43.314331   14240 round_trippers.go:580]     Audit-Id: d96b29f5-68c3-4bcc-be51-6147b59b54cf
	I0318 10:59:43.314578   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:43.314578   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:43.314578   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:43.314631   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:43.314667   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:43 GMT
	I0318 10:59:43.315222   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-499500","namespace":"kube-system","uid":"0d268f61-ae12-4f7c-835d-46539c8014f1","resourceVersion":"615","creationTimestamp":"2024-03-18T10:56:48Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7fbea5131bbed18cc0f38cdbd279a602","kubernetes.io/config.mirror":"7fbea5131bbed18cc0f38cdbd279a602","kubernetes.io/config.seen":"2024-03-18T10:56:47.998688110Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7177 chars]
	I0318 10:59:43.518062   14240 request.go:629] Waited for 202.0814ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:43.518062   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:43.518062   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:43.518062   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:43.518062   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:43.521585   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:43.521585   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:43.521585   14240 round_trippers.go:580]     Audit-Id: 35aeca6c-66b7-4d8c-b947-63baa4186d69
	I0318 10:59:43.521585   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:43.521585   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:43.521585   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:43.522721   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:43.522721   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:43 GMT
	I0318 10:59:43.522980   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:43.523439   14240 pod_ready.go:92] pod "kube-controller-manager-functional-499500" in "kube-system" namespace has status "Ready":"True"
	I0318 10:59:43.523439   14240 pod_ready.go:81] duration metric: took 398.619ms for pod "kube-controller-manager-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:43.523439   14240 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rm8c5" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:43.708446   14240 request.go:629] Waited for 185.0059ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rm8c5
	I0318 10:59:43.708823   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rm8c5
	I0318 10:59:43.708884   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:43.708884   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:43.708884   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:43.713193   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:43.713193   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:43.713419   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:43.713419   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:43.713419   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:43.713419   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:43 GMT
	I0318 10:59:43.713419   14240 round_trippers.go:580]     Audit-Id: 1605717b-0047-4721-a72b-1c39950afe06
	I0318 10:59:43.713419   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:43.713504   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rm8c5","generateName":"kube-proxy-","namespace":"kube-system","uid":"ed875552-beb2-4ba2-9347-76450b017fa2","resourceVersion":"610","creationTimestamp":"2024-03-18T10:57:00Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"155407e9-6a0d-4f6c-84c3-caa04cfd81ee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:57:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"155407e9-6a0d-4f6c-84c3-caa04cfd81ee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5740 chars]
	I0318 10:59:43.915762   14240 request.go:629] Waited for 201.2839ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:43.915867   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:43.915867   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:43.915867   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:43.915867   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:43.923190   14240 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 10:59:43.923190   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:43.923190   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:43.923190   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:43.923190   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:43.923190   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:43 GMT
	I0318 10:59:43.923190   14240 round_trippers.go:580]     Audit-Id: 1f1dffe2-14ee-4942-94b8-48040cd91466
	I0318 10:59:43.923190   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:43.923783   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:43.924483   14240 pod_ready.go:92] pod "kube-proxy-rm8c5" in "kube-system" namespace has status "Ready":"True"
	I0318 10:59:43.924547   14240 pod_ready.go:81] duration metric: took 401.1057ms for pod "kube-proxy-rm8c5" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:43.924606   14240 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:44.106030   14240 request.go:629] Waited for 181.0802ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-499500
	I0318 10:59:44.106030   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-499500
	I0318 10:59:44.106162   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:44.106162   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:44.106162   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:44.111055   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:44.111055   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:44.111247   14240 round_trippers.go:580]     Audit-Id: e0baeac8-d87c-4a58-af60-b72b1238785c
	I0318 10:59:44.111247   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:44.111247   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:44.111247   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:44.111247   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:44.111247   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:44 GMT
	I0318 10:59:44.111433   14240 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-499500","namespace":"kube-system","uid":"753b8bd0-5da0-4ffd-9b9f-186f575f8392","resourceVersion":"611","creationTimestamp":"2024-03-18T10:56:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fbeb8a40253ca29771ec7ba412e4b29f","kubernetes.io/config.mirror":"fbeb8a40253ca29771ec7ba412e4b29f","kubernetes.io/config.seen":"2024-03-18T10:56:38.158328411Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:56:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4907 chars]
	I0318 10:59:44.127342   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:59:44.127342   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:59:44.128142   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:59:44.127342   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:59:44.131525   14240 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 10:59:44.129000   14240 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 10:59:44.133815   14240 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 10:59:44.133815   14240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 10:59:44.133815   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:59:44.134444   14240 kapi.go:59] client config for functional-499500: &rest.Config{Host:"https://172.25.151.65:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-499500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-499500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
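
Note on the repeated "client-side throttling" waits above: the client config dump shows QPS:0 and Burst:0, which makes client-go fall back to its default token-bucket limiter of 5 QPS with a burst of 10, i.e. roughly one request per 200ms once the burst is spent. That matches the ~140-200ms waits logged by request.go. A minimal Go sketch making that default explicit (withDefaultThrottle and waitTurn are illustrative names, not minikube code):

    // ratelimit.go: make the default client-go token bucket explicit.
    package ratelimit

    import (
        "context"

        "k8s.io/client-go/rest"
        "k8s.io/client-go/util/flowcontrol"
    )

    // withDefaultThrottle returns a copy of cfg with the default limiter
    // (5 QPS, burst 10) set explicitly instead of implied by QPS/Burst == 0.
    func withDefaultThrottle(cfg *rest.Config) *rest.Config {
        out := rest.CopyConfig(cfg)
        out.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(5.0, 10)
        return out
    }

    // waitTurn blocks until the limiter grants a token; this is the pause
    // that request.go reports as "Waited for ... due to client-side throttling".
    func waitTurn(ctx context.Context, rl flowcontrol.RateLimiter) error {
        return rl.Wait(ctx)
    }
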
	I0318 10:59:44.135117   14240 addons.go:234] Setting addon default-storageclass=true in "functional-499500"
	W0318 10:59:44.135117   14240 addons.go:243] addon default-storageclass should already be in state true
	I0318 10:59:44.135117   14240 host.go:66] Checking if "functional-499500" exists ...
	I0318 10:59:44.137372   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:59:44.312406   14240 request.go:629] Waited for 200.1883ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:44.312406   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes/functional-499500
	I0318 10:59:44.312677   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:44.312677   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:44.312677   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:44.317082   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:44.317082   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:44.317082   14240 round_trippers.go:580]     Audit-Id: b46d6c24-070a-4eca-8848-361b20fef627
	I0318 10:59:44.317082   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:44.317082   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:44.317082   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:44.317082   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:44.317082   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:44 GMT
	I0318 10:59:44.317440   14240 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T10:56:43Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0318 10:59:44.317981   14240 pod_ready.go:92] pod "kube-scheduler-functional-499500" in "kube-system" namespace has status "Ready":"True"
	I0318 10:59:44.317981   14240 pod_ready.go:81] duration metric: took 393.373ms for pod "kube-scheduler-functional-499500" in "kube-system" namespace to be "Ready" ...
	I0318 10:59:44.318129   14240 pod_ready.go:38] duration metric: took 2.1656731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
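
The readiness phase that just completed polls each system-critical pod until its Ready condition is True. A minimal client-go sketch of that pattern, assuming an already configured *kubernetes.Clientset; podReady and waitForPod are illustrative names, not the pod_ready.go implementation:

    // readiness.go: poll a kube-system pod for condition Ready=True.
    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the named kube-system pod is Ready.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    // waitForPod polls until the pod is Ready or the timeout elapses.
    func waitForPod(cs *kubernetes.Clientset, name string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if ok, _ := podReady(context.TODO(), cs, name); ok {
                return true
            }
            time.Sleep(200 * time.Millisecond) // roughly the throttle interval seen above
        }
        return false
    }
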
	I0318 10:59:44.318129   14240 api_server.go:52] waiting for apiserver process to appear ...
	I0318 10:59:44.330099   14240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 10:59:44.360904   14240 command_runner.go:130] > 6433
	I0318 10:59:44.361014   14240 api_server.go:72] duration metric: took 2.5770206s to wait for apiserver process to appear ...
	I0318 10:59:44.361014   14240 api_server.go:88] waiting for apiserver healthz status ...
	I0318 10:59:44.361014   14240 api_server.go:253] Checking apiserver healthz at https://172.25.151.65:8441/healthz ...
	I0318 10:59:44.370828   14240 api_server.go:279] https://172.25.151.65:8441/healthz returned 200:
	ok
	I0318 10:59:44.371448   14240 round_trippers.go:463] GET https://172.25.151.65:8441/version
	I0318 10:59:44.371514   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:44.371514   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:44.371514   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:44.373639   14240 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 10:59:44.373801   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:44.373854   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:44.373854   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:44.373854   14240 round_trippers.go:580]     Content-Length: 264
	I0318 10:59:44.373854   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:44 GMT
	I0318 10:59:44.373854   14240 round_trippers.go:580]     Audit-Id: 23063761-b879-48c2-9ca8-c0fd48d0e6d5
	I0318 10:59:44.373854   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:44.373854   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:44.373916   14240 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 10:59:44.373982   14240 api_server.go:141] control plane version: v1.28.4
	I0318 10:59:44.373982   14240 api_server.go:131] duration metric: took 12.9673ms to wait for apiserver health ...
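
The health wait above is a plain HTTPS GET against /healthz that expects HTTP 200 with body "ok" (both visible in the log). A sketch, assuming an http.Client already carrying the profile's client certificates and CA; apiServerHealthy is an illustrative name:

    // healthz.go: probe the apiserver healthz endpoint.
    package healthz

    import (
        "fmt"
        "io"
        "net/http"
    )

    // apiServerHealthy GETs https://<hostPort>/healthz and expects 200 "ok".
    func apiServerHealthy(client *http.Client, hostPort string) error {
        resp, err := client.Get(fmt.Sprintf("https://%s/healthz", hostPort))
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return err
        }
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }
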
	I0318 10:59:44.373982   14240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 10:59:44.516063   14240 request.go:629] Waited for 141.9236ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods
	I0318 10:59:44.516436   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods
	I0318 10:59:44.516436   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:44.516436   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:44.516436   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:44.522029   14240 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 10:59:44.522029   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:44.522029   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:44.522029   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:44.522538   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:44.522538   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:44.522538   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:44 GMT
	I0318 10:59:44.522538   14240 round_trippers.go:580]     Audit-Id: 1e557ca6-a811-45e7-bc0d-a840093a4200
	I0318 10:59:44.525619   14240 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"627"},"items":[{"metadata":{"name":"coredns-5dd5756b68-k65n6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eef8e07c-ce5d-4ef0-a8ec-2266dd920be2","resourceVersion":"609","creationTimestamp":"2024-03-18T10:57:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17e1b190-888e-4a50-a300-2aa6dc04ffee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:57:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17e1b190-888e-4a50-a300-2aa6dc04ffee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 48031 chars]
	I0318 10:59:44.528097   14240 system_pods.go:59] 7 kube-system pods found
	I0318 10:59:44.528097   14240 system_pods.go:61] "coredns-5dd5756b68-k65n6" [eef8e07c-ce5d-4ef0-a8ec-2266dd920be2] Running
	I0318 10:59:44.528097   14240 system_pods.go:61] "etcd-functional-499500" [6cf47567-9439-4c90-9a6f-0c703184b674] Running
	I0318 10:59:44.528097   14240 system_pods.go:61] "kube-apiserver-functional-499500" [0c41fd26-6547-4cf0-ac99-344dcef1194a] Running
	I0318 10:59:44.528097   14240 system_pods.go:61] "kube-controller-manager-functional-499500" [0d268f61-ae12-4f7c-835d-46539c8014f1] Running
	I0318 10:59:44.528097   14240 system_pods.go:61] "kube-proxy-rm8c5" [ed875552-beb2-4ba2-9347-76450b017fa2] Running
	I0318 10:59:44.528097   14240 system_pods.go:61] "kube-scheduler-functional-499500" [753b8bd0-5da0-4ffd-9b9f-186f575f8392] Running
	I0318 10:59:44.528097   14240 system_pods.go:61] "storage-provisioner" [8c751c63-0f7c-44a2-a6ae-f56aca9513fd] Running
	I0318 10:59:44.528097   14240 system_pods.go:74] duration metric: took 154.114ms to wait for pod list to return data ...
	I0318 10:59:44.528097   14240 default_sa.go:34] waiting for default service account to be created ...
	I0318 10:59:44.705905   14240 request.go:629] Waited for 177.642ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/namespaces/default/serviceaccounts
	I0318 10:59:44.706169   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/default/serviceaccounts
	I0318 10:59:44.706305   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:44.706305   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:44.706305   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:44.710788   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:44.711593   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:44.711593   14240 round_trippers.go:580]     Audit-Id: 50eb9fba-83e2-4ae6-abae-0d1cb29f26a7
	I0318 10:59:44.711593   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:44.711593   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:44.711593   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:44.711593   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:44.711593   14240 round_trippers.go:580]     Content-Length: 261
	I0318 10:59:44.711593   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:44 GMT
	I0318 10:59:44.711593   14240 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"627"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c52e0a5d-3cc9-4493-889e-5e4a5b7b37d4","resourceVersion":"336","creationTimestamp":"2024-03-18T10:56:59Z"}}]}
	I0318 10:59:44.712012   14240 default_sa.go:45] found service account: "default"
	I0318 10:59:44.712012   14240 default_sa.go:55] duration metric: took 183.9144ms for default service account to be created ...
	I0318 10:59:44.712012   14240 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 10:59:44.904513   14240 request.go:629] Waited for 192.2555ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods
	I0318 10:59:44.904619   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/namespaces/kube-system/pods
	I0318 10:59:44.904619   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:44.904709   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:44.904709   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:44.910451   14240 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 10:59:44.910967   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:44.910967   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:44.910967   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:44.911056   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:44.911056   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:44.911056   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:44 GMT
	I0318 10:59:44.911125   14240 round_trippers.go:580]     Audit-Id: 5680a0f3-5ec3-4164-8929-de755be6fd4c
	I0318 10:59:44.912752   14240 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"627"},"items":[{"metadata":{"name":"coredns-5dd5756b68-k65n6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eef8e07c-ce5d-4ef0-a8ec-2266dd920be2","resourceVersion":"609","creationTimestamp":"2024-03-18T10:57:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17e1b190-888e-4a50-a300-2aa6dc04ffee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T10:57:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17e1b190-888e-4a50-a300-2aa6dc04ffee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 48031 chars]
	I0318 10:59:44.915274   14240 system_pods.go:86] 7 kube-system pods found
	I0318 10:59:44.915409   14240 system_pods.go:89] "coredns-5dd5756b68-k65n6" [eef8e07c-ce5d-4ef0-a8ec-2266dd920be2] Running
	I0318 10:59:44.915409   14240 system_pods.go:89] "etcd-functional-499500" [6cf47567-9439-4c90-9a6f-0c703184b674] Running
	I0318 10:59:44.915409   14240 system_pods.go:89] "kube-apiserver-functional-499500" [0c41fd26-6547-4cf0-ac99-344dcef1194a] Running
	I0318 10:59:44.915409   14240 system_pods.go:89] "kube-controller-manager-functional-499500" [0d268f61-ae12-4f7c-835d-46539c8014f1] Running
	I0318 10:59:44.915409   14240 system_pods.go:89] "kube-proxy-rm8c5" [ed875552-beb2-4ba2-9347-76450b017fa2] Running
	I0318 10:59:44.915409   14240 system_pods.go:89] "kube-scheduler-functional-499500" [753b8bd0-5da0-4ffd-9b9f-186f575f8392] Running
	I0318 10:59:44.915409   14240 system_pods.go:89] "storage-provisioner" [8c751c63-0f7c-44a2-a6ae-f56aca9513fd] Running
	I0318 10:59:44.915409   14240 system_pods.go:126] duration metric: took 203.3949ms to wait for k8s-apps to be running ...
	I0318 10:59:44.915523   14240 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 10:59:44.927557   14240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 10:59:44.954689   14240 system_svc.go:56] duration metric: took 39.1658ms WaitForService to wait for kubelet
	I0318 10:59:44.954689   14240 kubeadm.go:576] duration metric: took 3.1708018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 10:59:44.954689   14240 node_conditions.go:102] verifying NodePressure condition ...
	I0318 10:59:45.104419   14240 request.go:629] Waited for 149.6194ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.65:8441/api/v1/nodes
	I0318 10:59:45.104735   14240 round_trippers.go:463] GET https://172.25.151.65:8441/api/v1/nodes
	I0318 10:59:45.104735   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:45.104735   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:45.104735   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:45.109325   14240 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 10:59:45.109988   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:45.109988   14240 round_trippers.go:580]     Audit-Id: bb18cfec-e8d6-49d9-b008-87f58bdcc480
	I0318 10:59:45.109988   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:45.109988   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:45.109988   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:45.109988   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:45.109988   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:45 GMT
	I0318 10:59:45.110519   14240 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"627"},"items":[{"metadata":{"name":"functional-499500","uid":"557dbbfe-a067-46b0-be28-19b255fdddf2","resourceVersion":"547","creationTimestamp":"2024-03-18T10:56:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-499500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"functional-499500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T10_56_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4839 chars]
	I0318 10:59:45.110761   14240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 10:59:45.110761   14240 node_conditions.go:123] node cpu capacity is 2
	I0318 10:59:45.110761   14240 node_conditions.go:105] duration metric: took 156.0704ms to run NodePressure ...
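
The NodePressure step reads each node's capacity from the API; the two figures logged above (ephemeral storage and CPU) come from Status.Capacity. A sketch under the same clientset assumption; printCapacity is an illustrative name:

    // nodecheck.go: read the node capacities logged by node_conditions.go.
    package nodecheck

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printCapacity lists nodes and prints ephemeral-storage and cpu capacity.
    func printCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }
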
	I0318 10:59:45.110761   14240 start.go:240] waiting for startup goroutines ...
	I0318 10:59:46.420171   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:59:46.420171   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:59:46.420263   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:59:46.453673   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:59:46.453673   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:59:46.453985   14240 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 10:59:46.454030   14240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 10:59:46.454116   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
	I0318 10:59:48.697818   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 10:59:48.697818   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:59:48.697818   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
	I0318 10:59:49.103001   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:59:49.103001   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:59:49.103001   14240 sshutil.go:53] new ssh client: &{IP:172.25.151.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-499500\id_rsa Username:docker}
	I0318 10:59:49.251048   14240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 10:59:50.421195   14240 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0318 10:59:50.421325   14240 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0318 10:59:50.421325   14240 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0318 10:59:50.421426   14240 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0318 10:59:50.421426   14240 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0318 10:59:50.421426   14240 command_runner.go:130] > pod/storage-provisioner configured
	I0318 10:59:50.421426   14240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.17037s)
	I0318 10:59:51.327609   14240 main.go:141] libmachine: [stdout =====>] : 172.25.151.65
	
	I0318 10:59:51.327609   14240 main.go:141] libmachine: [stderr =====>] : 
	I0318 10:59:51.329195   14240 sshutil.go:53] new ssh client: &{IP:172.25.151.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-499500\id_rsa Username:docker}
	I0318 10:59:51.477287   14240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 10:59:51.712825   14240 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0318 10:59:51.713331   14240 round_trippers.go:463] GET https://172.25.151.65:8441/apis/storage.k8s.io/v1/storageclasses
	I0318 10:59:51.713331   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:51.713331   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:51.713331   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:51.715779   14240 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 10:59:51.715779   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:51.715779   14240 round_trippers.go:580]     Content-Length: 1273
	I0318 10:59:51.715779   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:51 GMT
	I0318 10:59:51.715779   14240 round_trippers.go:580]     Audit-Id: 77638996-465e-42e4-99eb-9b737d07fc6d
	I0318 10:59:51.715779   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:51.715779   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:51.715779   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:51.715779   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:51.716715   14240 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"630"},"items":[{"metadata":{"name":"standard","uid":"d4c11230-2b65-40a4-9408-c937acad0863","resourceVersion":"428","creationTimestamp":"2024-03-18T10:57:10Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-18T10:57:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0318 10:59:51.716864   14240 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d4c11230-2b65-40a4-9408-c937acad0863","resourceVersion":"428","creationTimestamp":"2024-03-18T10:57:10Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-18T10:57:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0318 10:59:51.716864   14240 round_trippers.go:463] PUT https://172.25.151.65:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 10:59:51.716864   14240 round_trippers.go:469] Request Headers:
	I0318 10:59:51.716864   14240 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 10:59:51.716864   14240 round_trippers.go:473]     Accept: application/json, */*
	I0318 10:59:51.716864   14240 round_trippers.go:473]     Content-Type: application/json
	I0318 10:59:51.720723   14240 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 10:59:51.720723   14240 round_trippers.go:577] Response Headers:
	I0318 10:59:51.720723   14240 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9602d49a-5802-49f7-ba73-14985fa07d01
	I0318 10:59:51.720723   14240 round_trippers.go:580]     Content-Length: 1220
	I0318 10:59:51.720723   14240 round_trippers.go:580]     Date: Mon, 18 Mar 2024 10:59:51 GMT
	I0318 10:59:51.720723   14240 round_trippers.go:580]     Audit-Id: 36a82365-719d-4ec2-b1fb-9338ced7c411
	I0318 10:59:51.720723   14240 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 10:59:51.720723   14240 round_trippers.go:580]     Content-Type: application/json
	I0318 10:59:51.720723   14240 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af8f0ca5-0280-466e-9ee1-4180e65d18b0
	I0318 10:59:51.721211   14240 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d4c11230-2b65-40a4-9408-c937acad0863","resourceVersion":"428","creationTimestamp":"2024-03-18T10:57:10Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-18T10:57:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
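
The GET-then-PUT pair above is how the default-storageclass addon keeps the "standard" class marked default: read the object, ensure the storageclass.kubernetes.io/is-default-class annotation is "true", and write it back otherwise unchanged. A sketch with client-go's typed storage client; markDefault is an illustrative name:

    // defaultsc.go: re-assert the default-class annotation on a StorageClass.
    package defaultsc

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    const defaultClassAnnotation = "storageclass.kubernetes.io/is-default-class"

    // markDefault fetches the class and updates it in place (the PUT in the log).
    func markDefault(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations[defaultClassAnnotation] = "true"
        _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
        return err
    }
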
	I0318 10:59:51.725054   14240 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 10:59:51.728874   14240 addons.go:505] duration metric: took 9.9449434s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 10:59:51.728874   14240 start.go:245] waiting for cluster config update ...
	I0318 10:59:51.728874   14240 start.go:254] writing updated cluster config ...
	I0318 10:59:51.740678   14240 ssh_runner.go:195] Run: rm -f paused
	I0318 10:59:51.891092   14240 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 10:59:51.898167   14240 out.go:177] * Done! kubectl is now configured to use "functional-499500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.008934080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.009147302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 10:59:29 functional-499500 cri-dockerd[5659]: time="2024-03-18T10:59:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c2b86e3b150bc1e03df7d61b35167eb2d2e55f34e1260f35a4e384ff2e57b771/resolv.conf as [nameserver 172.25.144.1]"
	Mar 18 10:59:29 functional-499500 cri-dockerd[5659]: time="2024-03-18T10:59:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ee07f0ccafdd32a96a6e6aa5208b27d59fff42830a30ea944b6e923bcd47755/resolv.conf as [nameserver 172.25.144.1]"
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.531482926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.532104789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.532436522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 10:59:29 functional-499500 cri-dockerd[5659]: time="2024-03-18T10:59:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a072d6f4ed2b78ea4aedb30abc5a6941d128df1aaead5477357f14db5e6de5b8/resolv.conf as [nameserver 172.25.144.1]"
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.536355320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.620234235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.620663779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.642994248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.643154764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 10:59:29 functional-499500 dockerd[5425]: time="2024-03-18T10:59:29.869701013Z" level=info msg="ignoring event" container=e6279db69344099860e18d25151e4667227a5ee419ca18c52630d483309ff6af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.872397489Z" level=info msg="shim disconnected" id=e6279db69344099860e18d25151e4667227a5ee419ca18c52630d483309ff6af namespace=moby
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.873453296Z" level=warning msg="cleaning up after shim disconnected" id=e6279db69344099860e18d25151e4667227a5ee419ca18c52630d483309ff6af namespace=moby
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.873705922Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.985449540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.986446842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.986658164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 10:59:29 functional-499500 dockerd[5440]: time="2024-03-18T10:59:29.987402340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 10:59:42 functional-499500 dockerd[5440]: time="2024-03-18T10:59:42.454619480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 10:59:42 functional-499500 dockerd[5440]: time="2024-03-18T10:59:42.454825777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 10:59:42 functional-499500 dockerd[5440]: time="2024-03-18T10:59:42.454864976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 10:59:42 functional-499500 dockerd[5440]: time="2024-03-18T10:59:42.456421354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7897fad71287f       6e38f40d628db       About a minute ago   Running             storage-provisioner       2                   c2b86e3b150bc       storage-provisioner
	2c14928ba8045       ead0a4a53df89       2 minutes ago        Running             coredns                   1                   a072d6f4ed2b7       coredns-5dd5756b68-k65n6
	af70d684fae75       83f6cc407eed8       2 minutes ago        Running             kube-proxy                1                   2ee07f0ccafdd       kube-proxy-rm8c5
	e6279db693440       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       1                   c2b86e3b150bc       storage-provisioner
	57d1f9fbe10b6       e3db313c6dbc0       2 minutes ago        Running             kube-scheduler            1                   c4aeb6d8a1706       kube-scheduler-functional-499500
	426300e81a5e2       7fe0e6f37db33       2 minutes ago        Running             kube-apiserver            1                   f341d0ffd5836       kube-apiserver-functional-499500
	11d0a107fa0c5       d058aa5ab969c       2 minutes ago        Running             kube-controller-manager   1                   ba7979656565a       kube-controller-manager-functional-499500
	829b21257daa2       73deb9a3f7025       2 minutes ago        Running             etcd                      1                   70470eb020f3c       etcd-functional-499500
	775a09aa04f6f       ead0a4a53df89       4 minutes ago        Exited              coredns                   0                   aab0bdb305c4f       coredns-5dd5756b68-k65n6
	aa0ce1c414298       83f6cc407eed8       4 minutes ago        Exited              kube-proxy                0                   70e7eb023004b       kube-proxy-rm8c5
	6d29ea5cd6091       73deb9a3f7025       5 minutes ago        Exited              etcd                      0                   b0896172ad0a3       etcd-functional-499500
	d3608688bd85c       d058aa5ab969c       5 minutes ago        Exited              kube-controller-manager   0                   cee324202f977       kube-controller-manager-functional-499500
	ff1e361f0d677       e3db313c6dbc0       5 minutes ago        Exited              kube-scheduler            0                   598b6fbfbc841       kube-scheduler-functional-499500
	fe08297df9b17       7fe0e6f37db33       5 minutes ago        Exited              kube-apiserver            0                   d2a94d355f441       kube-apiserver-functional-499500
	
	
	==> coredns [2c14928ba804] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41977 - 19261 "HINFO IN 1465043691978008128.4505766762807485834. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.035855857s
	
	
	==> coredns [775a09aa04f6] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37558 - 47528 "HINFO IN 1985894299834504483.8521242039118798190. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034393904s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-499500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-499500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=functional-499500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T10_56_48_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 10:56:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-499500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:01:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:01:30 +0000   Mon, 18 Mar 2024 10:56:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:01:30 +0000   Mon, 18 Mar 2024 10:56:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:01:30 +0000   Mon, 18 Mar 2024 10:56:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:01:30 +0000   Mon, 18 Mar 2024 10:56:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.151.65
	  Hostname:    functional-499500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf33400de0f74c18a6667678a50bc3d1
	  System UUID:                fb5beab4-9a11-9c49-b849-db909b50b62e
	  Boot ID:                    2b668ceb-6caf-4c74-ac30-5afb37bb7650
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-k65n6                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m41s
	  kube-system                 etcd-functional-499500                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m53s
	  kube-system                 kube-apiserver-functional-499500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-functional-499500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-rm8c5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-scheduler-functional-499500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             170Mi (4%)   170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m38s                  kube-proxy       
	  Normal  Starting                 2m11s                  kube-proxy       
	  Normal  NodeHasSufficientPID     5m3s (x7 over 5m3s)    kubelet          Node functional-499500 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m3s (x8 over 5m3s)    kubelet          Node functional-499500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m3s (x8 over 5m3s)    kubelet          Node functional-499500 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  5m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m53s                  kubelet          Node functional-499500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s                  kubelet          Node functional-499500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s                  kubelet          Node functional-499500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m52s                  kubelet          Node functional-499500 status is now: NodeReady
	  Normal  RegisteredNode           4m42s                  node-controller  Node functional-499500 event: Registered Node functional-499500 in Controller
	  Normal  Starting                 2m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node functional-499500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node functional-499500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m19s (x7 over 2m19s)  kubelet          Node functional-499500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m1s                   node-controller  Node functional-499500 event: Registered Node functional-499500 in Controller
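
[Editor's note] The node summary above (conditions, capacity, and the allocated-resources table) is what `kubectl describe node` renders from the Node object's status; the 750m CPU of requests is 37% of the node's 2000m allocatable. A minimal client-go sketch that reads the same condition data directly is shown below. The kubeconfig default and the node name functional-499500 are taken from this run; the program itself is illustrative, not part of the test harness.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig written by `minikube start`.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name taken from this test run.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-499500", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Prints the same Type/Status/Reason columns as the Conditions table above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}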
	
	
	==> dmesg <==
	[  +5.180930] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.713211] systemd-fstab-generator[1524]: Ignoring "noauto" option for root device
	[  +6.877492] systemd-fstab-generator[1795]: Ignoring "noauto" option for root device
	[  +0.116431] kauditd_printk_skb: 51 callbacks suppressed
	[  +9.875011] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.162070] kauditd_printk_skb: 62 callbacks suppressed
	[Mar18 10:57] systemd-fstab-generator[3304]: Ignoring "noauto" option for root device
	[  +0.238086] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.342345] kauditd_printk_skb: 80 callbacks suppressed
	[ +31.932548] kauditd_printk_skb: 8 callbacks suppressed
	[Mar18 10:59] systemd-fstab-generator[4952]: Ignoring "noauto" option for root device
	[  +0.711484] systemd-fstab-generator[4989]: Ignoring "noauto" option for root device
	[  +0.302659] systemd-fstab-generator[5001]: Ignoring "noauto" option for root device
	[  +0.338781] systemd-fstab-generator[5015]: Ignoring "noauto" option for root device
	[  +5.420414] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.887980] systemd-fstab-generator[5609]: Ignoring "noauto" option for root device
	[  +0.229552] systemd-fstab-generator[5621]: Ignoring "noauto" option for root device
	[  +0.227566] systemd-fstab-generator[5633]: Ignoring "noauto" option for root device
	[  +0.294126] systemd-fstab-generator[5647]: Ignoring "noauto" option for root device
	[  +0.912831] systemd-fstab-generator[5801]: Ignoring "noauto" option for root device
	[  +3.399780] systemd-fstab-generator[5937]: Ignoring "noauto" option for root device
	[  +0.120014] kauditd_printk_skb: 139 callbacks suppressed
	[  +7.015349] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.412365] kauditd_printk_skb: 28 callbacks suppressed
	[  +1.483995] systemd-fstab-generator[7061]: Ignoring "noauto" option for root device
	
	
	==> etcd [6d29ea5cd609] <==
	{"level":"info","ts":"2024-03-18T10:56:41.056203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe0434ae4b6f4b95 became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T10:56:41.058693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe0434ae4b6f4b95 received MsgVoteResp from fe0434ae4b6f4b95 at term 2"}
	{"level":"info","ts":"2024-03-18T10:56:41.059097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe0434ae4b6f4b95 became leader at term 2"}
	{"level":"info","ts":"2024-03-18T10:56:41.05938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe0434ae4b6f4b95 elected leader fe0434ae4b6f4b95 at term 2"}
	{"level":"info","ts":"2024-03-18T10:56:41.065965Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fe0434ae4b6f4b95","local-member-attributes":"{Name:functional-499500 ClientURLs:[https://172.25.151.65:2379]}","request-path":"/0/members/fe0434ae4b6f4b95/attributes","cluster-id":"ac6eee70a9f977b3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T10:56:41.066278Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T10:56:41.067797Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T10:56:41.069986Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T10:56:41.075691Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T10:56:41.077111Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.151.65:2379"}
	{"level":"info","ts":"2024-03-18T10:56:41.078733Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T10:56:41.079029Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T10:56:41.079391Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ac6eee70a9f977b3","local-member-id":"fe0434ae4b6f4b95","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T10:56:41.079686Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T10:56:41.080751Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T10:59:03.8354Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-18T10:59:03.835449Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-499500","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.25.151.65:2380"],"advertise-client-urls":["https://172.25.151.65:2379"]}
	{"level":"warn","ts":"2024-03-18T10:59:03.835644Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T10:59:03.835803Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T10:59:03.911619Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.25.151.65:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T10:59:03.911703Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.25.151.65:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T10:59:03.911827Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fe0434ae4b6f4b95","current-leader-member-id":"fe0434ae4b6f4b95"}
	{"level":"info","ts":"2024-03-18T10:59:03.921953Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"172.25.151.65:2380"}
	{"level":"info","ts":"2024-03-18T10:59:03.922097Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"172.25.151.65:2380"}
	{"level":"info","ts":"2024-03-18T10:59:03.922125Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-499500","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.25.151.65:2380"],"advertise-client-urls":["https://172.25.151.65:2379"]}
	
	
	==> etcd [829b21257daa] <==
	{"level":"info","ts":"2024-03-18T10:59:24.773208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe0434ae4b6f4b95 received MsgPreVoteResp from fe0434ae4b6f4b95 at term 2"}
	{"level":"info","ts":"2024-03-18T10:59:24.773503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe0434ae4b6f4b95 became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T10:59:24.773767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe0434ae4b6f4b95 received MsgVoteResp from fe0434ae4b6f4b95 at term 3"}
	{"level":"info","ts":"2024-03-18T10:59:24.774313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe0434ae4b6f4b95 became leader at term 3"}
	{"level":"info","ts":"2024-03-18T10:59:24.774897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe0434ae4b6f4b95 elected leader fe0434ae4b6f4b95 at term 3"}
	{"level":"info","ts":"2024-03-18T10:59:24.790898Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fe0434ae4b6f4b95","local-member-attributes":"{Name:functional-499500 ClientURLs:[https://172.25.151.65:2379]}","request-path":"/0/members/fe0434ae4b6f4b95/attributes","cluster-id":"ac6eee70a9f977b3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T10:59:24.791102Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T10:59:24.791131Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T10:59:24.791667Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T10:59:24.817328Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T10:59:24.825652Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T10:59:24.87863Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.151.65:2379"}
	{"level":"info","ts":"2024-03-18T11:00:12.232407Z","caller":"traceutil/trace.go:171","msg":"trace[744564116] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"100.885186ms","start":"2024-03-18T11:00:12.131491Z","end":"2024-03-18T11:00:12.232376Z","steps":["trace[744564116] 'process raft request'  (duration: 100.021189ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T11:00:18.990166Z","caller":"traceutil/trace.go:171","msg":"trace[390308953] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"697.620008ms","start":"2024-03-18T11:00:18.292512Z","end":"2024-03-18T11:00:18.990132Z","steps":["trace[390308953] 'process raft request'  (duration: 697.493208ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:00:18.991099Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T11:00:18.292499Z","time spent":"697.860608ms","remote":"127.0.0.1:34938","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:647 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-03-18T11:00:19.01157Z","caller":"traceutil/trace.go:171","msg":"trace[382351032] linearizableReadLoop","detail":"{readStateIndex:713; appliedIndex:711; }","duration":"183.928641ms","start":"2024-03-18T11:00:18.827628Z","end":"2024-03-18T11:00:19.011557Z","steps":["trace[382351032] 'read index received'  (duration: 164.366866ms)","trace[382351032] 'applied index is now lower than readState.Index'  (duration: 19.561275ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T11:00:19.011738Z","caller":"traceutil/trace.go:171","msg":"trace[799968064] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"599.66735ms","start":"2024-03-18T11:00:18.412013Z","end":"2024-03-18T11:00:19.01168Z","steps":["trace[799968064] 'process raft request'  (duration: 599.41905ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:00:19.012156Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T11:00:18.41199Z","time spent":"600.11935ms","remote":"127.0.0.1:35070","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/functional-499500\" mod_revision:641 > success:<request_put:<key:\"/registry/leases/kube-node-lease/functional-499500\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/functional-499500\" > >"}
	{"level":"info","ts":"2024-03-18T11:00:19.0125Z","caller":"traceutil/trace.go:171","msg":"trace[1178427216] transaction","detail":"{read_only:false; response_revision:650; number_of_response:1; }","duration":"531.217847ms","start":"2024-03-18T11:00:18.48127Z","end":"2024-03-18T11:00:19.012488Z","steps":["trace[1178427216] 'process raft request'  (duration: 530.256448ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:00:19.012801Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T11:00:18.481249Z","time spent":"531.520047ms","remote":"127.0.0.1:35070","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-o5vgc6rutsym65vmas4xgjqife\" mod_revision:642 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-o5vgc6rutsym65vmas4xgjqife\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-o5vgc6rutsym65vmas4xgjqife\" > >"}
	{"level":"warn","ts":"2024-03-18T11:00:19.013267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.706539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.25.151.65\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-03-18T11:00:19.013725Z","caller":"traceutil/trace.go:171","msg":"trace[991884091] range","detail":"{range_begin:/registry/masterleases/172.25.151.65; range_end:; response_count:1; response_revision:650; }","duration":"186.164239ms","start":"2024-03-18T11:00:18.827551Z","end":"2024-03-18T11:00:19.013715Z","steps":["trace[991884091] 'agreement among raft nodes before linearized reading'  (duration: 185.680839ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T11:00:21.996684Z","caller":"traceutil/trace.go:171","msg":"trace[1352129043] transaction","detail":"{read_only:false; response_revision:652; number_of_response:1; }","duration":"994.45537ms","start":"2024-03-18T11:00:21.002209Z","end":"2024-03-18T11:00:21.996664Z","steps":["trace[1352129043] 'process raft request'  (duration: 993.250671ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:00:21.997113Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T11:00:21.002193Z","time spent":"994.80207ms","remote":"127.0.0.1:34938","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:648 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-03-18T11:00:44.39594Z","caller":"traceutil/trace.go:171","msg":"trace[2083239901] transaction","detail":"{read_only:false; response_revision:670; number_of_response:1; }","duration":"143.910722ms","start":"2024-03-18T11:00:44.251957Z","end":"2024-03-18T11:00:44.395868Z","steps":["trace[2083239901] 'process raft request'  (duration: 143.722321ms)"],"step_count":1}
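
[Editor's note] The "apply request took too long" entry and the long "process raft request" traces above flag writes that blew past etcd's 100ms expected duration, mostly on the storage-provisioner endpoint and node-lease keys. A rough way to probe the same write path is to time a single Put through the v3 client, as in the sketch below. The loopback endpoint and probe key are assumptions for illustration, and this kubeadm-managed etcd additionally requires client TLS certificates, so the sketch would need clientv3.Config.TLS filled in to run against it.

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Assumed endpoint; client TLS material (clientv3.Config.TLS) is omitted.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	start := time.Now()
	// Hypothetical key, used only for this latency probe.
	if _, err := cli.Put(ctx, "/probe/latency", "x"); err != nil {
		panic(err)
	}
	// etcd's own warning threshold in the traces above is 100ms.
	fmt.Printf("put took %v\n", time.Since(start))
}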
	
	
	==> kernel <==
	 11:01:41 up 7 min,  0 users,  load average: 0.35, 0.52, 0.27
	Linux functional-499500 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [426300e81a5e] <==
	I0318 10:59:28.178670       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0318 10:59:28.841500       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.151.65]
	I0318 10:59:28.843732       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 10:59:28.860696       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 10:59:29.233242       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 10:59:29.265822       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 10:59:29.457232       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 10:59:29.531690       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 10:59:29.571786       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 11:00:18.992562       1 trace.go:236] Trace[29665905]: "Update" accept:application/json, */*,audit-id:becafda8-f130-4148-87f2-d77c88e0cf3a,client:172.25.151.65,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (18-Mar-2024 11:00:18.290) (total time: 701ms):
	Trace[29665905]: ["GuaranteedUpdate etcd3" audit-id:becafda8-f130-4148-87f2-d77c88e0cf3a,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 701ms (11:00:18.290)
	Trace[29665905]:  ---"Txn call completed" 700ms (11:00:18.992)]
	Trace[29665905]: [701.635102ms] [701.635102ms] END
	I0318 11:00:19.014199       1 trace.go:236] Trace[1002727047]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:86b99f2a-e085-4448-9d75-7366bc9f976d,client:172.25.151.65,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-499500,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (18-Mar-2024 11:00:18.409) (total time: 604ms):
	Trace[1002727047]: ["GuaranteedUpdate etcd3" audit-id:86b99f2a-e085-4448-9d75-7366bc9f976d,key:/leases/kube-node-lease/functional-499500,type:*coordination.Lease,resource:leases.coordination.k8s.io 604ms (11:00:18.410)
	Trace[1002727047]:  ---"Txn call completed" 602ms (11:00:19.013)]
	Trace[1002727047]: [604.291344ms] [604.291344ms] END
	I0318 11:00:19.016468       1 trace.go:236] Trace[496692906]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:e48f54d2-f3f1-47cc-8992-95df3fb1e666,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-o5vgc6rutsym65vmas4xgjqife,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (18-Mar-2024 11:00:18.479) (total time: 536ms):
	Trace[496692906]: ["GuaranteedUpdate etcd3" audit-id:e48f54d2-f3f1-47cc-8992-95df3fb1e666,key:/leases/kube-system/apiserver-o5vgc6rutsym65vmas4xgjqife,type:*coordination.Lease,resource:leases.coordination.k8s.io 536ms (11:00:18.479)
	Trace[496692906]:  ---"Txn call completed" 535ms (11:00:19.016)]
	Trace[496692906]: [536.677841ms] [536.677841ms] END
	I0318 11:00:21.998469       1 trace.go:236] Trace[1024325257]: "Update" accept:application/json, */*,audit-id:0fa223a5-1f2a-41d7-8c18-39def33064fc,client:172.25.151.65,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (18-Mar-2024 11:00:21.000) (total time: 998ms):
	Trace[1024325257]: ["GuaranteedUpdate etcd3" audit-id:0fa223a5-1f2a-41d7-8c18-39def33064fc,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 997ms (11:00:21.000)
	Trace[1024325257]:  ---"Txn call completed" 996ms (11:00:21.998)]
	Trace[1024325257]: [998.234767ms] [998.234767ms] END
	
	
	==> kube-apiserver [fe08297df9b1] <==
	W0318 10:59:12.881644       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:12.881787       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:12.894039       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:12.918486       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:12.927095       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.030438       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.037855       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.050959       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.105400       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.230609       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.287819       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.348539       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.362050       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.405220       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.422661       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.428881       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.447214       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.455532       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.506554       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.564373       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.638699       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.642507       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.657602       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.670792       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 10:59:13.771704       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [11d0a107fa0c] <==
	I0318 10:59:40.407429       1 shared_informer.go:318] Caches are synced for taint
	I0318 10:59:40.407820       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 10:59:40.408191       1 taint_manager.go:210] "Sending events to api server"
	I0318 10:59:40.408413       1 shared_informer.go:318] Caches are synced for expand
	I0318 10:59:40.408782       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 10:59:40.412014       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-499500"
	I0318 10:59:40.412213       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 10:59:40.412447       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 10:59:40.409270       1 event.go:307] "Event occurred" object="functional-499500" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-499500 event: Registered Node functional-499500 in Controller"
	I0318 10:59:40.427492       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 10:59:40.460361       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 10:59:40.491942       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 10:59:40.498890       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 10:59:40.500389       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 10:59:40.502794       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 10:59:40.504992       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 10:59:40.513818       1 shared_informer.go:318] Caches are synced for deployment
	I0318 10:59:40.531697       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 10:59:40.538532       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 10:59:40.545321       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 10:59:40.546098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="243.897µs"
	I0318 10:59:40.563263       1 shared_informer.go:318] Caches are synced for disruption
	I0318 10:59:40.908600       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 10:59:40.985825       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 10:59:40.985876       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [d3608688bd85] <==
	I0318 10:57:00.108242       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 10:57:00.144886       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rm8c5"
	I0318 10:57:00.219835       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0318 10:57:00.370774       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8xfzd"
	I0318 10:57:00.492952       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 10:57:00.493325       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 10:57:00.513403       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-k65n6"
	I0318 10:57:00.517557       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 10:57:00.609427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="400.38314ms"
	I0318 10:57:00.691251       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.533376ms"
	I0318 10:57:00.696605       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.701µs"
	I0318 10:57:00.842145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.801µs"
	I0318 10:57:02.856911       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0318 10:57:02.906266       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-8xfzd"
	I0318 10:57:02.934071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.786409ms"
	I0318 10:57:02.955538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.589022ms"
	I0318 10:57:02.955735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.798µs"
	I0318 10:57:03.185294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.3µs"
	I0318 10:57:03.228606       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.8µs"
	I0318 10:57:13.086148       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.195µs"
	I0318 10:57:13.487690       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="212.985µs"
	I0318 10:57:13.538956       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.996µs"
	I0318 10:57:13.545144       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.993µs"
	I0318 10:57:40.945039       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.796931ms"
	I0318 10:57:40.946137       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.995µs"
	
	
	==> kube-proxy [aa0ce1c41429] <==
	I0318 10:57:02.257828       1 server_others.go:69] "Using iptables proxy"
	I0318 10:57:02.302704       1 node.go:141] Successfully retrieved node IP: 172.25.151.65
	I0318 10:57:02.516910       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 10:57:02.516941       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 10:57:02.605145       1 server_others.go:152] "Using iptables Proxier"
	I0318 10:57:02.605205       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 10:57:02.605426       1 server.go:846] "Version info" version="v1.28.4"
	I0318 10:57:02.605438       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 10:57:02.611751       1 config.go:188] "Starting service config controller"
	I0318 10:57:02.662102       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 10:57:02.662124       1 shared_informer.go:318] Caches are synced for service config
	I0318 10:57:02.617329       1 config.go:97] "Starting endpoint slice config controller"
	I0318 10:57:02.662183       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 10:57:02.662190       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 10:57:02.617814       1 config.go:315] "Starting node config controller"
	I0318 10:57:02.662561       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 10:57:02.662589       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [af70d684fae7] <==
	I0318 10:59:30.013467       1 server_others.go:69] "Using iptables proxy"
	I0318 10:59:30.059979       1 node.go:141] Successfully retrieved node IP: 172.25.151.65
	I0318 10:59:30.135448       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 10:59:30.135702       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 10:59:30.141461       1 server_others.go:152] "Using iptables Proxier"
	I0318 10:59:30.141779       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 10:59:30.142344       1 server.go:846] "Version info" version="v1.28.4"
	I0318 10:59:30.143015       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 10:59:30.144224       1 config.go:188] "Starting service config controller"
	I0318 10:59:30.144398       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 10:59:30.144444       1 config.go:97] "Starting endpoint slice config controller"
	I0318 10:59:30.144532       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 10:59:30.145678       1 config.go:315] "Starting node config controller"
	I0318 10:59:30.145761       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 10:59:30.245130       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 10:59:30.245203       1 shared_informer.go:318] Caches are synced for service config
	I0318 10:59:30.246191       1 shared_informer.go:318] Caches are synced for node config
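
[Editor's note] Both kube-proxy instances log the same startup sequence: each config controller starts, waits for its shared-informer cache to sync, and only then serves from it. The sketch below shows that client-go pattern in isolation; it mirrors the "Waiting for caches to sync" / "Caches are synced" pairs above but is not kube-proxy's actual code, and it assumes a reachable kubeconfig at the default path.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	stop := make(chan struct{})
	defer close(stop)

	// One informer for Services, analogous to the "service config controller".
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	factory.Start(stop)
	fmt.Println("Waiting for caches to sync for service config")
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("Caches are synced for service config")
}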
	
	
	==> kube-scheduler [57d1f9fbe10b] <==
	I0318 10:59:25.186056       1 serving.go:348] Generated self-signed cert in-memory
	W0318 10:59:27.288745       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 10:59:27.288824       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 10:59:27.288841       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 10:59:27.288853       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 10:59:27.380108       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 10:59:27.380689       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 10:59:27.392677       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 10:59:27.393358       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 10:59:27.393491       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 10:59:27.399133       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 10:59:27.503463       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ff1e361f0d67] <==
	E0318 10:56:44.451841       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 10:56:44.559240       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 10:56:44.559273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 10:56:44.570462       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 10:56:44.570755       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 10:56:44.592386       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 10:56:44.592414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 10:56:44.716935       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 10:56:44.717346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 10:56:44.809854       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 10:56:44.810305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 10:56:44.920349       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 10:56:44.920410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 10:56:44.927336       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 10:56:44.929444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 10:56:44.934740       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 10:56:44.934783       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 10:56:44.980588       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 10:56:44.980939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 10:56:44.993040       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 10:56:44.993258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 10:56:45.013785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 10:56:45.014040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 10:56:47.003311       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0318 10:59:03.735947       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 18 10:59:28 functional-499500 kubelet[5944]: I0318 10:59:28.195651    5944 topology_manager.go:215] "Topology Admit Handler" podUID="ed875552-beb2-4ba2-9347-76450b017fa2" podNamespace="kube-system" podName="kube-proxy-rm8c5"
	Mar 18 10:59:28 functional-499500 kubelet[5944]: I0318 10:59:28.195830    5944 topology_manager.go:215] "Topology Admit Handler" podUID="8c751c63-0f7c-44a2-a6ae-f56aca9513fd" podNamespace="kube-system" podName="storage-provisioner"
	Mar 18 10:59:28 functional-499500 kubelet[5944]: I0318 10:59:28.202812    5944 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 18 10:59:28 functional-499500 kubelet[5944]: I0318 10:59:28.251673    5944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8c751c63-0f7c-44a2-a6ae-f56aca9513fd-tmp\") pod \"storage-provisioner\" (UID: \"8c751c63-0f7c-44a2-a6ae-f56aca9513fd\") " pod="kube-system/storage-provisioner"
	Mar 18 10:59:28 functional-499500 kubelet[5944]: I0318 10:59:28.252247    5944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed875552-beb2-4ba2-9347-76450b017fa2-lib-modules\") pod \"kube-proxy-rm8c5\" (UID: \"ed875552-beb2-4ba2-9347-76450b017fa2\") " pod="kube-system/kube-proxy-rm8c5"
	Mar 18 10:59:28 functional-499500 kubelet[5944]: I0318 10:59:28.252393    5944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed875552-beb2-4ba2-9347-76450b017fa2-xtables-lock\") pod \"kube-proxy-rm8c5\" (UID: \"ed875552-beb2-4ba2-9347-76450b017fa2\") " pod="kube-system/kube-proxy-rm8c5"
	Mar 18 10:59:29 functional-499500 kubelet[5944]: I0318 10:59:29.567861    5944 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a072d6f4ed2b78ea4aedb30abc5a6941d128df1aaead5477357f14db5e6de5b8"
	Mar 18 10:59:29 functional-499500 kubelet[5944]: I0318 10:59:29.921464    5944 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ee07f0ccafdd32a96a6e6aa5208b27d59fff42830a30ea944b6e923bcd47755"
	Mar 18 10:59:29 functional-499500 kubelet[5944]: I0318 10:59:29.993903    5944 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2b86e3b150bc1e03df7d61b35167eb2d2e55f34e1260f35a4e384ff2e57b771"
	Mar 18 10:59:29 functional-499500 kubelet[5944]: I0318 10:59:29.994231    5944 scope.go:117] "RemoveContainer" containerID="e6279db69344099860e18d25151e4667227a5ee419ca18c52630d483309ff6af"
	Mar 18 10:59:29 functional-499500 kubelet[5944]: E0318 10:59:29.996606    5944 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c751c63-0f7c-44a2-a6ae-f56aca9513fd)\"" pod="kube-system/storage-provisioner" podUID="8c751c63-0f7c-44a2-a6ae-f56aca9513fd"
	Mar 18 10:59:31 functional-499500 kubelet[5944]: I0318 10:59:31.022789    5944 scope.go:117] "RemoveContainer" containerID="848095ad1ab345facdc058937f5f76e1a76a5259ba0b7276b7e5fcc2f882e290"
	Mar 18 10:59:31 functional-499500 kubelet[5944]: I0318 10:59:31.023215    5944 scope.go:117] "RemoveContainer" containerID="e6279db69344099860e18d25151e4667227a5ee419ca18c52630d483309ff6af"
	Mar 18 10:59:31 functional-499500 kubelet[5944]: E0318 10:59:31.023515    5944 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c751c63-0f7c-44a2-a6ae-f56aca9513fd)\"" pod="kube-system/storage-provisioner" podUID="8c751c63-0f7c-44a2-a6ae-f56aca9513fd"
	Mar 18 10:59:42 functional-499500 kubelet[5944]: I0318 10:59:42.212266    5944 scope.go:117] "RemoveContainer" containerID="e6279db69344099860e18d25151e4667227a5ee419ca18c52630d483309ff6af"
	Mar 18 11:00:22 functional-499500 kubelet[5944]: E0318 11:00:22.255548    5944 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:00:22 functional-499500 kubelet[5944]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:00:22 functional-499500 kubelet[5944]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:00:22 functional-499500 kubelet[5944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:00:22 functional-499500 kubelet[5944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:01:22 functional-499500 kubelet[5944]: E0318 11:01:22.240861    5944 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:01:22 functional-499500 kubelet[5944]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:01:22 functional-499500 kubelet[5944]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:01:22 functional-499500 kubelet[5944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:01:22 functional-499500 kubelet[5944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [7897fad71287] <==
	I0318 10:59:42.575917       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 10:59:42.595150       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 10:59:42.595213       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 11:00:00.015128       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 11:00:00.015726       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-499500_a39a41ab-ae2c-44ae-b3c7-c305b5bf2f42!
	I0318 11:00:00.017143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"73693536-c2bd-46b5-8e6d-0982bb80b178", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-499500_a39a41ab-ae2c-44ae-b3c7-c305b5bf2f42 became leader
	I0318 11:00:00.116601       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-499500_a39a41ab-ae2c-44ae-b3c7-c305b5bf2f42!
	
	
	==> storage-provisioner [e6279db69344] <==
	I0318 10:59:29.813111       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0318 10:59:29.815789       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
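
[Editor's note] This earlier storage-provisioner container died on its first API call because the in-cluster service VIP (10.96.0.1:443) was not reachable yet, which is why the kubelet log above shows it in a 10s CrashLoopBackOff until the replacement at 10:59:42 came up cleanly. Sketched below is the kind of wait-for-apiserver loop that avoids exiting on the first refused connection; the URL and timings come from this log, but the helper is hypothetical, not the provisioner's code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls the given URL until it answers or the deadline
// passes. Hypothetical helper; the real provisioner simply exits fatally.
func waitForAPIServer(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The in-cluster serving cert is not trusted here; skip verification
		// for this reachability probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("apiserver not reachable: %w", err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	// Address taken from the fatal log line above.
	if err := waitForAPIServer("https://10.96.0.1:443/version?timeout=32s", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("apiserver is up")
}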
	

-- /stdout --
** stderr ** 
	W0318 11:01:33.109292   10572 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-499500 -n functional-499500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-499500 -n functional-499500: (12.3714858s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-499500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (35.28s)

TestFunctional/parallel/ConfigCmd (1.84s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-499500 config unset cpus" to be -""- but got *"W0318 11:04:49.262548    2272 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-499500 config get cpus: exit status 14 (277.672ms)

** stderr **
	W0318 11:04:49.590553    5284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-499500 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0318 11:04:49.590553    5284 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-499500 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0318 11:04:49.864462    4472 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-499500 config get cpus" to be -""- but got *"W0318 11:04:50.182445    3756 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-499500 config unset cpus" to be -""- but got *"W0318 11:04:50.475441   11032 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-499500 config get cpus: exit status 14 (301.2317ms)

** stderr **
	W0318 11:04:50.805116    6012 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-499500 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0318 11:04:50.805116    6012 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
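Every assertion in this test fails for the same reason: each minikube invocation prepends the Docker CLI context warning to stderr, because the CLI's current-context pointer still names "default" while the matching metadata file under .docker\contexts\meta is gone, so exact-match stderr comparisons can never pass. A hedged cleanup sketch for the affected host, assuming the docker CLI is on PATH (not something this run verifies):

	# a stale current-context pointer surfaces here as an error per context
	docker context ls
	# re-selecting the built-in default context rewrites the pointer, which
	# should silence the warning on subsequent minikube invocations
	docker context use default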
--- FAIL: TestFunctional/parallel/ConfigCmd (1.84s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-499500 service --namespace=default --https --url hello-node: exit status 1 (15.0254075s)

** stderr **
	W0318 11:05:38.755252    4948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-499500 service --namespace=default --https --url hello-node" : exit status 1
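This failure, and the Format and URL subtests that follow, have the same shape: minikube service spends its full 15s lookup window, prints only the context warning, and exits 1 without producing a URL. A hedged cross-check that sidesteps minikube's URL resolution, assuming a hello-node service exists in the default namespace as the test expects:

	# read the NodePort straight from the service object; pairing it with the
	# node IP reported by "minikube -p functional-499500 ip" gives the URL the
	# test was waiting for
	kubectl --context functional-499500 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'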
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

TestFunctional/parallel/ServiceCmd/Format (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-499500 service hello-node --url --format={{.IP}}: exit status 1 (15.0390964s)

** stderr **
	W0318 11:05:53.716278    5940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-499500 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.04s)

TestFunctional/parallel/ServiceCmd/URL (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-499500 service hello-node --url: exit status 1 (15.031899s)

** stderr **
	W0318 11:06:08.729317    1092 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-499500 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.03s)

TestMultiControlPlane/serial/PingHostFromPods (70.22s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-bsmjb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-bsmjb -- sh -c "ping -c 1 172.25.144.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-bsmjb -- sh -c "ping -c 1 172.25.144.1": exit status 1 (10.5187363s)

-- stdout --
	PING 172.25.144.1 (172.25.144.1): 56 data bytes
	
	--- 172.25.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss
-- /stdout --
** stderr **
	W0318 11:25:50.001559    1864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1
** /stderr **
ha_test.go:219: Failed to ping host (172.25.144.1) from pod (busybox-5b5d89c9d6-bsmjb): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-cqzzh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-cqzzh -- sh -c "ping -c 1 172.25.144.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-cqzzh -- sh -c "ping -c 1 172.25.144.1": exit status 1 (10.5456075s)

-- stdout --
	PING 172.25.144.1 (172.25.144.1): 56 data bytes
	
	--- 172.25.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss
-- /stdout --
** stderr **
	W0318 11:26:01.149820   12956 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1
** /stderr **
ha_test.go:219: Failed to ping host (172.25.144.1) from pod (busybox-5b5d89c9d6-cqzzh): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-qdlmz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-qdlmz -- sh -c "ping -c 1 172.25.144.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-qdlmz -- sh -c "ping -c 1 172.25.144.1": exit status 1 (10.5365171s)

-- stdout --
	PING 172.25.144.1 (172.25.144.1): 56 data bytes
	
	--- 172.25.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss
-- /stdout --
** stderr **
	W0318 11:26:12.234824    8900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1
** /stderr **
ha_test.go:219: Failed to ping host (172.25.144.1) from pod (busybox-5b5d89c9d6-qdlmz): exit status 1
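All three pods resolve host.minikube.internal and reach cluster DNS, yet lose 100% of ICMP echoes to the Hyper-V host address 172.25.144.1, which points at echo replies being dropped on the host side rather than at a CNI fault. A hedged re-check from one of the same pods, assuming it is still running, that the profile's kubeconfig context is named ha-606900, and that busybox ping accepts -c and -W:

	# a longer probe window separates outright loss from slow replies
	kubectl --context ha-606900 exec busybox-5b5d89c9d6-bsmjb -- \
	  sh -c "ping -c 4 -W 2 172.25.144.1"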
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-606900 -n ha-606900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-606900 -n ha-606900: (12.6641925s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 logs -n 25: (9.1552264s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-606900 -- get pods -o          | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- get pods -o          | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- get pods -o          | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- get pods -o          | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- get pods -o          | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- get pods -o          | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- get pods -o          | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- get pods -o          | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- get pods -o          | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | busybox-5b5d89c9d6-bsmjb --          |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | busybox-5b5d89c9d6-cqzzh --          |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | busybox-5b5d89c9d6-qdlmz --          |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | busybox-5b5d89c9d6-bsmjb --          |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | busybox-5b5d89c9d6-cqzzh --          |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | busybox-5b5d89c9d6-qdlmz --          |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | busybox-5b5d89c9d6-bsmjb -- nslookup |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | busybox-5b5d89c9d6-cqzzh -- nslookup |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | busybox-5b5d89c9d6-qdlmz -- nslookup |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- get pods -o          | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	|         | busybox-5b5d89c9d6-bsmjb             |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC |                     |
	|         | busybox-5b5d89c9d6-bsmjb -- sh       |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.144.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:26 UTC | 18 Mar 24 11:26 UTC |
	|         | busybox-5b5d89c9d6-cqzzh             |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:26 UTC |                     |
	|         | busybox-5b5d89c9d6-cqzzh -- sh       |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.144.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:26 UTC | 18 Mar 24 11:26 UTC |
	|         | busybox-5b5d89c9d6-qdlmz             |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-606900 -- exec                 | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:26 UTC |                     |
	|         | busybox-5b5d89c9d6-qdlmz -- sh       |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.144.1            |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 11:12:35
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 11:12:35.830652    6988 out.go:291] Setting OutFile to fd 880 ...
	I0318 11:12:35.830652    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:12:35.830652    6988 out.go:304] Setting ErrFile to fd 1084...
	I0318 11:12:35.830652    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:12:35.853346    6988 out.go:298] Setting JSON to false
	I0318 11:12:35.856650    6988 start.go:129] hostinfo: {"hostname":"minikube6","uptime":135680,"bootTime":1710624675,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0318 11:12:35.856650    6988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 11:12:35.862951    6988 out.go:177] * [ha-606900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0318 11:12:35.869346    6988 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 11:12:35.869346    6988 notify.go:220] Checking for updates...
	I0318 11:12:35.872190    6988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 11:12:35.874973    6988 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0318 11:12:35.877697    6988 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 11:12:35.879630    6988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 11:12:35.883006    6988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 11:12:41.331610    6988 out.go:177] * Using the hyperv driver based on user configuration
	I0318 11:12:41.336169    6988 start.go:297] selected driver: hyperv
	I0318 11:12:41.336169    6988 start.go:901] validating driver "hyperv" against <nil>
	I0318 11:12:41.336169    6988 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 11:12:41.386045    6988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 11:12:41.387380    6988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:12:41.387380    6988 cni.go:84] Creating CNI manager for ""
	I0318 11:12:41.387380    6988 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 11:12:41.387380    6988 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 11:12:41.388101    6988 start.go:340] cluster config:
	{Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:12:41.388476    6988 iso.go:125] acquiring lock: {Name:mk859ea173f7c19f70b69d7017f4a5a661cd1500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 11:12:41.391613    6988 out.go:177] * Starting "ha-606900" primary control-plane node in "ha-606900" cluster
	I0318 11:12:41.395867    6988 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:12:41.396127    6988 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 11:12:41.396200    6988 cache.go:56] Caching tarball of preloaded images
	I0318 11:12:41.396548    6988 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:12:41.396728    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:12:41.396935    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:12:41.397498    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json: {Name:mk88c122c030bbdaff9f17f92b0a3b058cc8268e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:12:41.398473    6988 start.go:360] acquireMachinesLock for ha-606900: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:12:41.398473    6988 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-606900"
	I0318 11:12:41.399069    6988 start.go:93] Provisioning new machine with config: &{Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:12:41.399214    6988 start.go:125] createHost starting for "" (driver="hyperv")
	I0318 11:12:41.401877    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 11:12:41.401877    6988 start.go:159] libmachine.API.Create for "ha-606900" (driver="hyperv")
	I0318 11:12:41.401877    6988 client.go:168] LocalClient.Create starting
	I0318 11:12:41.402715    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0318 11:12:41.402897    6988 main.go:141] libmachine: Decoding PEM data...
	I0318 11:12:41.402897    6988 main.go:141] libmachine: Parsing certificate...
	I0318 11:12:41.403172    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0318 11:12:41.403172    6988 main.go:141] libmachine: Decoding PEM data...
	I0318 11:12:41.403172    6988 main.go:141] libmachine: Parsing certificate...
	I0318 11:12:41.403172    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 11:12:43.540920    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 11:12:43.540920    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:12:43.541042    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 11:12:45.362820    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 11:12:45.362820    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:12:45.362820    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:12:46.887627    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:12:46.888371    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:12:46.888493    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:12:50.575417    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:12:50.575417    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:12:50.578640    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 11:12:51.070101    6988 main.go:141] libmachine: Creating SSH key...
	I0318 11:12:51.160792    6988 main.go:141] libmachine: Creating VM...
	I0318 11:12:51.160792    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:12:54.035165    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:12:54.036083    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:12:54.036083    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0318 11:12:54.036083    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:12:55.837167    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:12:56.006013    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:12:56.006271    6988 main.go:141] libmachine: Creating VHD
	I0318 11:12:56.006425    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 11:12:59.751439    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 01C96E1C-CEF4-4C5C-B17A-6C2519E24BE7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 11:12:59.751439    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:12:59.751439    6988 main.go:141] libmachine: Writing magic tar header
	I0318 11:12:59.752231    6988 main.go:141] libmachine: Writing SSH key tar header
	I0318 11:12:59.764120    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 11:13:02.936326    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:02.936418    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:02.936507    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\disk.vhd' -SizeBytes 20000MB
	I0318 11:13:05.559857    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:05.559857    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:05.560408    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-606900 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 11:13:09.253956    6988 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-606900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 11:13:09.253956    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:09.253956    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-606900 -DynamicMemoryEnabled $false
	I0318 11:13:11.531707    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:11.531760    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:11.531760    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-606900 -Count 2
	I0318 11:13:13.743011    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:13.743791    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:13.743791    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-606900 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\boot2docker.iso'
	I0318 11:13:16.406409    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:16.406409    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:16.406409    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-606900 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\disk.vhd'
	I0318 11:13:19.101952    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:19.102435    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:19.102435    6988 main.go:141] libmachine: Starting VM...
	I0318 11:13:19.102637    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-606900
	I0318 11:13:22.206846    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:22.206846    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:22.206846    6988 main.go:141] libmachine: Waiting for host to start...
	I0318 11:13:22.206846    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:24.475079    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:24.476078    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:24.476078    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:13:27.080284    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:27.080426    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:28.089986    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:30.316981    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:30.316981    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:30.316981    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:13:32.880486    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:32.880563    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:33.887545    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:36.163932    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:36.164049    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:36.164114    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:13:38.689956    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:38.689956    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:39.701591    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:41.948552    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:41.948843    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:41.948843    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:13:44.585824    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:44.585824    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:45.586379    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:47.814899    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:47.814899    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:47.815571    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:13:50.508581    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:13:50.508581    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:50.509682    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:52.688509    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:52.688509    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:52.688509    6988 machine.go:94] provisionDockerMachine start ...
	I0318 11:13:52.689070    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:54.855211    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:54.855211    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:54.855211    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:13:57.437874    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:13:57.437874    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:57.444710    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:13:57.455227    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:13:57.455227    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:13:57.572975    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 11:13:57.573099    6988 buildroot.go:166] provisioning hostname "ha-606900"
	I0318 11:13:57.573200    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:59.696497    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:59.696497    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:59.697094    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:02.260379    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:02.260379    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:02.268333    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:02.269249    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:02.269249    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-606900 && echo "ha-606900" | sudo tee /etc/hostname
	I0318 11:14:02.422346    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-606900
	
	I0318 11:14:02.422560    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:04.581810    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:04.581948    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:04.581948    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:07.223238    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:07.223756    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:07.230356    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:07.230898    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:07.230898    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-606900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-606900/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-606900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:14:07.368603    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 11:14:07.368603    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0318 11:14:07.368603    6988 buildroot.go:174] setting up certificates
	I0318 11:14:07.368603    6988 provision.go:84] configureAuth start
	I0318 11:14:07.368603    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:09.530602    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:09.530602    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:09.530602    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:12.097818    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:12.098191    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:12.098191    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:14.328640    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:14.328843    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:14.328975    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:16.982410    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:16.983030    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:16.983091    6988 provision.go:143] copyHostCerts
	I0318 11:14:16.983091    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0318 11:14:16.983091    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0318 11:14:16.983091    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0318 11:14:16.983876    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:14:16.984546    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0318 11:14:16.985138    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0318 11:14:16.985138    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0318 11:14:16.985138    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:14:16.986267    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0318 11:14:16.986832    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0318 11:14:16.986832    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0318 11:14:16.986973    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:14:16.987907    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-606900 san=[127.0.0.1 172.25.148.74 ha-606900 localhost minikube]
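
The server certificate is generated in Go by minikube's provisioner; no openssl call appears in the log. Purely as an illustration of what the log line describes (org and SAN list copied from it), the equivalent openssl steps would look roughly like:

	# illustrative only -- minikube does this in Go, not via openssl
	openssl req -new -newkey rsa:2048 -nodes \
	    -keyout server-key.pem -out server.csr -subj "/O=jenkins.ha-606900"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -out server.pem -days 365 \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:172.25.148.74,DNS:ha-606900,DNS:localhost,DNS:minikube')
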
	I0318 11:14:17.067089    6988 provision.go:177] copyRemoteCerts
	I0318 11:14:17.079889    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:14:17.080000    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:19.266914    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:19.266914    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:19.267473    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:21.882070    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:21.882180    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:21.882601    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:14:21.996499    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9164682s)
	I0318 11:14:21.996499    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 11:14:21.997099    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 11:14:22.046399    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 11:14:22.047382    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:14:22.097218    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 11:14:22.098124    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0318 11:14:22.145565    6988 provision.go:87] duration metric: took 14.7767048s to configureAuth
	I0318 11:14:22.145565    6988 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:14:22.145741    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:14:22.145741    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:24.325007    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:24.325348    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:24.325444    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:26.942633    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:26.942633    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:26.947961    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:26.948851    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:26.948851    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:14:27.084291    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:14:27.084380    6988 buildroot.go:70] root file system type: tmpfs
	I0318 11:14:27.084669    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:14:27.084757    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:29.264363    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:29.265231    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:29.265403    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:31.873428    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:31.874150    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:31.880927    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:31.880927    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:31.881633    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:14:32.039753    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:14:32.039753    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:34.224968    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:34.224968    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:34.225711    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:36.823813    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:36.823813    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:36.829884    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:36.830627    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:36.830627    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:14:39.056402    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
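The command at 11:14:36 is a compare-and-swap install: diff exits non-zero when the rendered unit differs from the one on disk, or, as here, when no unit exists yet, and only then is the new file moved into place and docker restarted. Unrolled for readability:

	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	if ! sudo diff -u "$cur" "$new"; then    # differs, or "$cur" is missing (as here)
	    sudo mv "$new" "$cur"
	    sudo systemctl daemon-reload         # the log passes -f (--force) to each step
	    sudo systemctl enable docker
	    sudo systemctl restart docker
	fi
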
	I0318 11:14:39.056402    6988 machine.go:97] duration metric: took 46.3676038s to provisionDockerMachine
	I0318 11:14:39.056402    6988 client.go:171] duration metric: took 1m57.6537933s to LocalClient.Create
	I0318 11:14:39.056402    6988 start.go:167] duration metric: took 1m57.6537933s to libmachine.API.Create "ha-606900"
	I0318 11:14:39.056402    6988 start.go:293] postStartSetup for "ha-606900" (driver="hyperv")
	I0318 11:14:39.056402    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:14:39.072210    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:14:39.072210    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:41.195630    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:41.195630    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:41.196046    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:43.797959    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:43.797959    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:43.798700    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:14:43.899850    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8276095s)
	I0318 11:14:43.910795    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:14:43.918063    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:14:43.918063    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0318 11:14:43.918766    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0318 11:14:43.919879    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> 91202.pem in /etc/ssl/certs
	I0318 11:14:43.919953    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /etc/ssl/certs/91202.pem
	I0318 11:14:43.931771    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 11:14:43.950302    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /etc/ssl/certs/91202.pem (1708 bytes)
	I0318 11:14:43.998543    6988 start.go:296] duration metric: took 4.9421102s for postStartSetup
	I0318 11:14:44.001483    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:46.189672    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:46.189672    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:46.189931    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:48.760889    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:48.760889    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:48.760889    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:14:48.764149    6988 start.go:128] duration metric: took 2m7.3640319s to createHost
	I0318 11:14:48.764243    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:50.921725    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:50.922186    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:50.922267    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:53.458294    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:53.458294    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:53.463531    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:53.464054    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:53.464266    6988 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 11:14:53.601918    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710760493.600160434
	
	I0318 11:14:53.602033    6988 fix.go:216] guest clock: 1710760493.600160434
	I0318 11:14:53.602033    6988 fix.go:229] Guest: 2024-03-18 11:14:53.600160434 +0000 UTC Remote: 2024-03-18 11:14:48.7641493 +0000 UTC m=+133.117652501 (delta=4.836011134s)
	I0318 11:14:53.602159    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:55.731159    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:55.731159    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:55.731888    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:58.324411    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:58.324411    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:58.330605    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:58.330781    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:58.330781    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710760493
	I0318 11:14:58.467711    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:14:53 UTC 2024
	
	I0318 11:14:58.467711    6988 fix.go:236] clock set: Mon Mar 18 11:14:53 UTC 2024
	 (err=<nil>)
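
fix.go compares the guest clock against the host and, on a noticeable delta (4.8s here, accumulated while the VM was provisioned), pushes the host's epoch time into the guest with date -s. A minimal sketch of the same check over plain SSH, assuming a GUEST alias for the VM:

	GUEST=docker@172.25.148.74               # assumed alias for the machine above
	guest_now=$(ssh "$GUEST" 'date +%s')     # guest epoch seconds
	host_now=$(date +%s)                     # host epoch seconds
	echo "delta: $(( guest_now - host_now ))s"
	ssh "$GUEST" "sudo date -s @${host_now}" # same correction as the log's 'sudo date -s @1710760493'
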
	I0318 11:14:58.467711    6988 start.go:83] releasing machines lock for "ha-606900", held for 2m17.0683849s
	I0318 11:14:58.468253    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:15:00.626638    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:15:00.626638    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:15:00.627358    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:15:03.172558    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:15:03.172737    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:15:03.176407    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:15:03.176944    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:15:03.187563    6988 ssh_runner.go:195] Run: cat /version.json
	I0318 11:15:03.187563    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:15:05.429669    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:15:05.430027    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:15:05.429669    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:15:05.430027    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:15:05.430027    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:15:05.430027    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:15:08.164238    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:15:08.164425    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:15:08.165138    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:15:08.192916    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:15:08.192916    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:15:08.192916    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:15:08.378416    6988 ssh_runner.go:235] Completed: cat /version.json: (5.1908208s)
	I0318 11:15:08.378416    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2019766s)
	I0318 11:15:08.392455    6988 ssh_runner.go:195] Run: systemctl --version
	I0318 11:15:08.415790    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 11:15:08.426446    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:15:08.437897    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:15:08.470268    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 11:15:08.470268    6988 start.go:494] detecting cgroup driver to use...
	I0318 11:15:08.470268    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:15:08.521885    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:15:08.560519    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:15:08.582589    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:15:08.594793    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:15:08.629378    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:15:08.663294    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:15:08.695346    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:15:08.728361    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:15:08.761540    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 11:15:08.798837    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:15:08.833460    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:15:08.865173    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:15:09.076059    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 11:15:09.111465    6988 start.go:494] detecting cgroup driver to use...
	I0318 11:15:09.123526    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:15:09.164500    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:15:09.200792    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:15:09.247572    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:15:09.285733    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:15:09.324017    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 11:15:09.396860    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:15:09.423438    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:15:09.473183    6988 ssh_runner.go:195] Run: which cri-dockerd
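
Both /etc/crictl.yaml writes above set only runtime-endpoint: first unix:///run/containerd/containerd.sock while probing runtimes, then unix:///var/run/cri-dockerd.sock once docker is the chosen one. Recreating the final state by hand and sanity-checking it:

	printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
	sudo crictl version    # should report docker, as the 'crictl version' run at 11:15:14 below does
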
	I0318 11:15:09.495100    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:15:09.514542    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:15:09.562129    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:15:09.768756    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:15:09.962207    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:15:09.962469    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 11:15:10.009364    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:15:10.229626    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:15:12.807677    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.578035s)
	I0318 11:15:12.822604    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:15:12.863840    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:15:12.902801    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:15:13.113512    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:15:13.323879    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:15:13.544907    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:15:13.589406    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:15:13.627073    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:15:13.860016    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:15:13.984295    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:15:13.997606    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 11:15:14.006456    6988 start.go:562] Will wait 60s for crictl version
	I0318 11:15:14.018351    6988 ssh_runner.go:195] Run: which crictl
	I0318 11:15:14.038735    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:15:14.123965    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 11:15:14.134375    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:15:14.180338    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:15:14.219031    6988 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:15:14.219117    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:15:14.223489    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:15:14.223489    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:15:14.223489    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:15:14.223489    6988 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ae:0d:2c Flags:up|broadcast|multicast|running}
	I0318 11:15:14.225997    6988 ip.go:210] interface addr: fe80::f8a6:d6b6:cc4:1ba0/64
	I0318 11:15:14.225997    6988 ip.go:210] interface addr: 172.25.144.1/20
	I0318 11:15:14.236688    6988 ssh_runner.go:195] Run: grep 172.25.144.1	host.minikube.internal$ /etc/hosts
	I0318 11:15:14.244226    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
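
The one-liner above replaces any stale host.minikube.internal line and pins the gateway address detected a few lines earlier. Unrolled, with the IP taken from the log:

	ip=172.25.144.1
	{ grep -v $'\thost.minikube.internal$' /etc/hosts      # drop any existing entry
	  printf '%s\thost.minikube.internal\n' "$ip"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts                           # install via cp, as in the log
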
	I0318 11:15:14.282730    6988 kubeadm.go:877] updating cluster {Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 11:15:14.282730    6988 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:15:14.291096    6988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 11:15:14.317091    6988 docker.go:685] Got preloaded images: 
	I0318 11:15:14.317091    6988 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0318 11:15:14.329604    6988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 11:15:14.361319    6988 ssh_runner.go:195] Run: which lz4
	I0318 11:15:14.367641    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0318 11:15:14.382177    6988 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 11:15:14.390907    6988 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 11:15:14.391708    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0318 11:15:16.867361    6988 docker.go:649] duration metric: took 2.4993461s to copy over tarball
	I0318 11:15:16.879350    6988 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 11:15:27.213341    6988 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.333927s)
	I0318 11:15:27.213341    6988 ssh_runner.go:146] rm: /preloaded.tar.lz4
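
The preload is an lz4-compressed tarball of /var/lib/docker (~423 MB here) extracted with extended attributes preserved so file capabilities inside the images survive. The same extraction by hand:

	sudo tar --xattrs --xattrs-include security.capability \
	    -I lz4 -C /var -xf /preloaded.tar.lz4    # -I lz4: filter through the lz4 binary
	sudo rm /preloaded.tar.lz4                   # reclaim the space once extracted
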
	I0318 11:15:27.289626    6988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 11:15:27.311710    6988 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0318 11:15:27.358685    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:15:27.584081    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:15:30.940835    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3567329s)
	I0318 11:15:30.950523    6988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 11:15:30.977200    6988 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 11:15:30.977200    6988 cache_images.go:84] Images are preloaded, skipping loading
	I0318 11:15:30.977200    6988 kubeadm.go:928] updating node { 172.25.148.74 8443 v1.28.4 docker true true} ...
	I0318 11:15:30.977634    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-606900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.148.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 11:15:30.985615    6988 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 11:15:31.023676    6988 cni.go:84] Creating CNI manager for ""
	I0318 11:15:31.023676    6988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 11:15:31.023676    6988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 11:15:31.023676    6988 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.148.74 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-606900 NodeName:ha-606900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.148.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.148.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 11:15:31.024296    6988 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.148.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-606900"
	  kubeletExtraArgs:
	    node-ip: 172.25.148.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.148.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
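This rendered config is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp at 11:15:31 below) and later promoted to kubeadm.yaml. With kubeadm v1.26+ one could sanity-check such a file before use; the binary location and path below are taken from the log:

	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
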
	I0318 11:15:31.024441    6988 kube-vip.go:111] generating kube-vip config ...
	I0318 11:15:31.036223    6988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 11:15:31.063438    6988 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 11:15:31.064129    6988 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.159.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
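
The manifest above is installed as a static pod (the scp to /etc/kubernetes/manifests/kube-vip.yaml at 11:15:31 below), so the kubelet runs kube-vip before any API server exists and the VIP 172.25.159.254 can front the control plane. Once the cluster is up, an illustrative check (the static-pod name suffix is assumed):

	curl -k https://172.25.159.254:8443/healthz          # VIP should answer on the apiserver port
	kubectl -n kube-system get pod kube-vip-ha-606900    # static pods are suffixed with the node name
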
	I0318 11:15:31.077227    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:15:31.095777    6988 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 11:15:31.108183    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 11:15:31.131556    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0318 11:15:31.166630    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:15:31.203761    6988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0318 11:15:31.236134    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 11:15:31.278196    6988 ssh_runner.go:195] Run: grep 172.25.159.254	control-plane.minikube.internal$ /etc/hosts
	I0318 11:15:31.284172    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 11:15:31.318863    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:15:31.524915    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:15:31.556536    6988 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900 for IP: 172.25.148.74
	I0318 11:15:31.556573    6988 certs.go:194] generating shared ca certs ...
	I0318 11:15:31.556573    6988 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:31.557159    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0318 11:15:31.558254    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:15:31.558476    6988 certs.go:256] generating profile certs ...
	I0318 11:15:31.559161    6988 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.key
	I0318 11:15:31.559246    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.crt with IP's: []
	I0318 11:15:31.855026    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.crt ...
	I0318 11:15:31.855026    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.crt: {Name:mkb17df9dd67cb5dcc5adc34992716fbc04b8b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:31.856802    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.key ...
	I0318 11:15:31.856802    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.key: {Name:mk23cfdf6c2c724d42b5d3e35a4719ab96f3e140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:31.858111    6988 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.cd5b8555
	I0318 11:15:31.858111    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.cd5b8555 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.148.74 172.25.159.254]
	I0318 11:15:31.987981    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.cd5b8555 ...
	I0318 11:15:31.987981    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.cd5b8555: {Name:mkfbbb9130ee551e1f450e55254fc02f385a5205 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:31.988936    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.cd5b8555 ...
	I0318 11:15:31.988936    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.cd5b8555: {Name:mk12dab8169c7d001cf37e7db396005c581d5ae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:31.989971    6988 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.cd5b8555 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt
	I0318 11:15:32.000772    6988 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.cd5b8555 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key
	I0318 11:15:32.002943    6988 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key
	I0318 11:15:32.002943    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt with IP's: []
	I0318 11:15:32.265600    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt ...
	I0318 11:15:32.265600    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt: {Name:mk61a0234249ded1fb4e50a22d45917a2fce202a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:32.266622    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key ...
	I0318 11:15:32.266622    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key: {Name:mk3980679371cd64440cd7f69688f3489b72c0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:32.267424    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 11:15:32.268442    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 11:15:32.268798    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 11:15:32.268926    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 11:15:32.269125    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 11:15:32.269125    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 11:15:32.269423    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 11:15:32.277703    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 11:15:32.278690    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem (1338 bytes)
	W0318 11:15:32.279253    6988 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120_empty.pem, impossibly tiny 0 bytes
	I0318 11:15:32.279253    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0318 11:15:32.279685    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:15:32.279830    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:15:32.280073    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:15:32.280686    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem (1708 bytes)
	I0318 11:15:32.280686    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /usr/share/ca-certificates/91202.pem
	I0318 11:15:32.280686    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:15:32.280686    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem -> /usr/share/ca-certificates/9120.pem
	I0318 11:15:32.281734    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:15:32.333780    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 11:15:32.383075    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:15:32.435769    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:15:32.491944    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 11:15:32.542185    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 11:15:32.592888    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:15:32.640426    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 11:15:32.685142    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /usr/share/ca-certificates/91202.pem (1708 bytes)
	I0318 11:15:32.728594    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:15:32.772436    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem --> /usr/share/ca-certificates/9120.pem (1338 bytes)
	I0318 11:15:32.820527    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 11:15:32.865090    6988 ssh_runner.go:195] Run: openssl version
	I0318 11:15:32.885993    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91202.pem && ln -fs /usr/share/ca-certificates/91202.pem /etc/ssl/certs/91202.pem"
	I0318 11:15:32.919499    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91202.pem
	I0318 11:15:32.926156    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 11:15:32.938534    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91202.pem
	I0318 11:15:32.961726    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91202.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 11:15:32.992688    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:15:33.025552    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:15:33.032459    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:15:33.044488    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:15:33.065870    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 11:15:33.097051    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9120.pem && ln -fs /usr/share/ca-certificates/9120.pem /etc/ssl/certs/9120.pem"
	I0318 11:15:33.127658    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9120.pem
	I0318 11:15:33.136046    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 11:15:33.147689    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9120.pem
	I0318 11:15:33.170767    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9120.pem /etc/ssl/certs/51391683.0"
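
The symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: each trusted cert gets a <hash>.0 link in /etc/ssl/certs so the library can find it by subject. How such a name is derived, using one of the files from the log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/9120.pem)
	echo "$h"    # prints 51391683 for this cert, matching the link above
	sudo ln -fs /etc/ssl/certs/9120.pem "/etc/ssl/certs/${h}.0"
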
	I0318 11:15:33.203224    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:15:33.208830    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 11:15:33.208830    6988 kubeadm.go:391] StartCluster: {Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:15:33.218535    6988 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 11:15:33.258468    6988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 11:15:33.290524    6988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 11:15:33.323280    6988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 11:15:33.340826    6988 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 11:15:33.340826    6988 kubeadm.go:156] found existing configuration files:
	
	I0318 11:15:33.354030    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 11:15:33.370382    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 11:15:33.383195    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 11:15:33.411138    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 11:15:33.427231    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 11:15:33.439051    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 11:15:33.468668    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 11:15:33.483589    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 11:15:33.495256    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 11:15:33.528264    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 11:15:33.545588    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 11:15:33.557341    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
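
The four grep/rm pairs above implement a stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not contain it (here the files simply do not exist yet, hence exit status 2) is removed so kubeadm regenerates it. A minimal sketch of that loop, with the paths and endpoint taken from the log and the helper itself illustrative:

    package main

    import "os/exec"

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		// A non-zero exit means the endpoint (or the whole file) is missing,
    		// so the stale file is deleted and kubeadm will rewrite it.
    		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
    			exec.Command("sudo", "rm", "-f", path).Run()
    		}
    	}
    }
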
	I0318 11:15:33.575885    6988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 11:15:34.068559    6988 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 11:15:49.726695    6988 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 11:15:49.726695    6988 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 11:15:49.726695    6988 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 11:15:49.726695    6988 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 11:15:49.727737    6988 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 11:15:49.727737    6988 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 11:15:49.730757    6988 out.go:204]   - Generating certificates and keys ...
	I0318 11:15:49.730757    6988 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 11:15:49.731098    6988 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 11:15:49.731098    6988 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 11:15:49.731377    6988 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 11:15:49.731496    6988 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 11:15:49.731617    6988 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 11:15:49.731731    6988 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 11:15:49.732058    6988 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-606900 localhost] and IPs [172.25.148.74 127.0.0.1 ::1]
	I0318 11:15:49.732203    6988 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 11:15:49.732464    6988 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-606900 localhost] and IPs [172.25.148.74 127.0.0.1 ::1]
	I0318 11:15:49.732464    6988 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 11:15:49.732679    6988 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 11:15:49.732809    6988 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 11:15:49.733056    6988 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 11:15:49.733178    6988 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 11:15:49.733300    6988 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 11:15:49.733434    6988 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 11:15:49.733554    6988 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 11:15:49.733799    6988 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 11:15:49.733886    6988 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 11:15:49.736782    6988 out.go:204]   - Booting up control plane ...
	I0318 11:15:49.737337    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 11:15:49.737598    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 11:15:49.737652    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 11:15:49.737652    6988 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 11:15:49.738540    6988 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 11:15:49.738540    6988 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 11:15:49.738540    6988 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 11:15:49.738540    6988 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.625069 seconds
	I0318 11:15:49.738540    6988 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 11:15:49.738540    6988 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 11:15:49.738540    6988 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 11:15:49.738540    6988 kubeadm.go:309] [mark-control-plane] Marking the node ha-606900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 11:15:49.738540    6988 kubeadm.go:309] [bootstrap-token] Using token: 36xohw.fzoxxaltg9qrulz1
	I0318 11:15:49.743396    6988 out.go:204]   - Configuring RBAC rules ...
	I0318 11:15:49.743396    6988 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 11:15:49.743396    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 11:15:49.743396    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 11:15:49.744451    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 11:15:49.744451    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 11:15:49.744451    6988 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 11:15:49.745163    6988 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 11:15:49.745163    6988 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 11:15:49.745163    6988 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 11:15:49.745163    6988 kubeadm.go:309] 
	I0318 11:15:49.745163    6988 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 11:15:49.745163    6988 kubeadm.go:309] 
	I0318 11:15:49.745163    6988 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 11:15:49.745163    6988 kubeadm.go:309] 
	I0318 11:15:49.745163    6988 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 11:15:49.745163    6988 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 11:15:49.745163    6988 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 11:15:49.745163    6988 kubeadm.go:309] 
	I0318 11:15:49.746192    6988 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 11:15:49.746192    6988 kubeadm.go:309] 
	I0318 11:15:49.746192    6988 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 11:15:49.746192    6988 kubeadm.go:309] 
	I0318 11:15:49.746192    6988 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 11:15:49.746192    6988 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 11:15:49.746192    6988 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 11:15:49.746192    6988 kubeadm.go:309] 
	I0318 11:15:49.747189    6988 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 11:15:49.747189    6988 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 11:15:49.747189    6988 kubeadm.go:309] 
	I0318 11:15:49.747189    6988 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 36xohw.fzoxxaltg9qrulz1 \
	I0318 11:15:49.747189    6988 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef \
	I0318 11:15:49.747189    6988 kubeadm.go:309] 	--control-plane 
	I0318 11:15:49.747189    6988 kubeadm.go:309] 
	I0318 11:15:49.747189    6988 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 11:15:49.748182    6988 kubeadm.go:309] 
	I0318 11:15:49.748182    6988 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 36xohw.fzoxxaltg9qrulz1 \
	I0318 11:15:49.748182    6988 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef 
	I0318 11:15:49.748182    6988 cni.go:84] Creating CNI manager for ""
	I0318 11:15:49.748182    6988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 11:15:49.752917    6988 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 11:15:49.769298    6988 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 11:15:49.777103    6988 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 11:15:49.777162    6988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 11:15:49.863177    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 11:15:51.610272    6988 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.7470842s)
	I0318 11:15:51.610377    6988 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 11:15:51.624674    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:51.625674    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-606900 minikube.k8s.io/updated_at=2024_03_18T11_15_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd minikube.k8s.io/name=ha-606900 minikube.k8s.io/primary=true
	I0318 11:15:51.642702    6988 ops.go:34] apiserver oom_adj: -16
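
The oom_adj line reflects the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run above: a value of -16 tells the kernel's OOM killer to strongly avoid the apiserver under memory pressure. A small sketch of the same check, assuming a single kube-apiserver process; illustrative only:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Assumes exactly one kube-apiserver process is running.
    	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // e.g. -16
    }
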
	I0318 11:15:51.917390    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:52.432422    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:52.921651    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:53.423594    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:53.927648    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:54.427276    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:54.927780    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:55.432091    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:55.918441    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:56.423358    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:56.924624    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:57.430536    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:57.919816    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:58.420603    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:58.924829    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:59.416909    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:59.922866    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:00.426554    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:00.934218    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:01.424216    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:01.928826    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:02.422204    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:02.926125    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:03.215851    6988 kubeadm.go:1107] duration metric: took 11.6052726s to wait for elevateKubeSystemPrivileges
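
The run of `kubectl get sa default` calls above is a poll-until-ready loop: roughly every 500 ms it checks whether kube-controller-manager has created the namespace's default service account, since the cluster-admin binding created earlier is useless until that account exists. A minimal sketch of the retry pattern, with the kubectl path from the log and the timeout assumed:

    package main

    import (
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
    	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			return // the default service account now exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	panic("timed out waiting for the default service account")
    }
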
	W0318 11:16:03.215965    6988 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 11:16:03.215965    6988 kubeadm.go:393] duration metric: took 30.0069459s to StartCluster
	I0318 11:16:03.215965    6988 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:16:03.215965    6988 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 11:16:03.217451    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:16:03.219612    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 11:16:03.219612    6988 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:16:03.219782    6988 start.go:240] waiting for startup goroutines ...
	I0318 11:16:03.219782    6988 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 11:16:03.219930    6988 addons.go:69] Setting storage-provisioner=true in profile "ha-606900"
	I0318 11:16:03.219930    6988 addons.go:234] Setting addon storage-provisioner=true in "ha-606900"
	I0318 11:16:03.219930    6988 addons.go:69] Setting default-storageclass=true in profile "ha-606900"
	I0318 11:16:03.220089    6988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-606900"
	I0318 11:16:03.220163    6988 host.go:66] Checking if "ha-606900" exists ...
	I0318 11:16:03.220514    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:16:03.221165    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:16:03.221858    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:16:03.435924    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 11:16:04.151307    6988 start.go:948] {"host.minikube.internal": 172.25.144.1} host record injected into CoreDNS's ConfigMap
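
The sed pipeline above edits the CoreDNS ConfigMap in place: it inserts a `hosts` block before the `forward . /etc/resolv.conf` directive so that host.minikube.internal resolves to the Hyper-V host (172.25.144.1), and adds a `log` directive before `errors`. Reconstructed from those sed expressions, the relevant Corefile fragment afterwards looks approximately like:

        log
        errors
        ...
        hosts {
           172.25.144.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
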
	I0318 11:16:05.622751    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:16:05.622824    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:05.625797    6988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 11:16:05.622824    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:16:05.625842    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:05.628075    6988 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 11:16:05.628193    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 11:16:05.628257    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:16:05.629123    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 11:16:05.630198    6988 kapi.go:59] client config for ha-606900: &rest.Config{Host:"https://172.25.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-606900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-606900\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 11:16:05.632140    6988 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 11:16:05.632217    6988 addons.go:234] Setting addon default-storageclass=true in "ha-606900"
	I0318 11:16:05.632217    6988 host.go:66] Checking if "ha-606900" exists ...
	I0318 11:16:05.633560    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:16:08.033179    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:16:08.033239    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:08.033376    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:16:08.033440    6988 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 11:16:08.033494    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 11:16:08.033440    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:08.033622    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:16:08.033622    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:16:10.409117    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:16:10.409117    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:10.409295    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:16:10.902325    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:16:10.902325    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:10.903400    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:16:11.060410    6988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 11:16:13.179053    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:16:13.180033    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:13.180625    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:16:13.332329    6988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 11:16:13.636035    6988 round_trippers.go:463] GET https://172.25.159.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0318 11:16:13.636106    6988 round_trippers.go:469] Request Headers:
	I0318 11:16:13.636106    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:16:13.636106    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:16:13.651451    6988 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0318 11:16:13.653120    6988 round_trippers.go:463] PUT https://172.25.159.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 11:16:13.653189    6988 round_trippers.go:469] Request Headers:
	I0318 11:16:13.653189    6988 round_trippers.go:473]     Content-Type: application/json
	I0318 11:16:13.653189    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:16:13.653189    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:16:13.658538    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
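
The GET lists the cluster's StorageClasses and the PUT rewrites `standard`, which is how a class is typically marked as the cluster default via the well-known storageclass.kubernetes.io/is-default-class annotation. A hedged client-go sketch of that likely intent; the kubeconfig path is from the log, and the code is illustrative rather than minikube's actual implementation:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	// Well-known annotation that marks a StorageClass as the cluster default.
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    }
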
	I0318 11:16:13.663668    6988 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 11:16:13.665252    6988 addons.go:505] duration metric: took 10.445404s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 11:16:13.665795    6988 start.go:245] waiting for cluster config update ...
	I0318 11:16:13.665854    6988 start.go:254] writing updated cluster config ...
	I0318 11:16:13.668278    6988 out.go:177] 
	I0318 11:16:13.680992    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:16:13.680992    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:16:13.686997    6988 out.go:177] * Starting "ha-606900-m02" control-plane node in "ha-606900" cluster
	I0318 11:16:13.691983    6988 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:16:13.691983    6988 cache.go:56] Caching tarball of preloaded images
	I0318 11:16:13.692511    6988 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:16:13.692870    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:16:13.693020    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:16:13.695637    6988 start.go:360] acquireMachinesLock for ha-606900-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:16:13.695741    6988 start.go:364] duration metric: took 75.6µs to acquireMachinesLock for "ha-606900-m02"
	I0318 11:16:13.696049    6988 start.go:93] Provisioning new machine with config: &{Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:16:13.696297    6988 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0318 11:16:13.704521    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 11:16:13.704521    6988 start.go:159] libmachine.API.Create for "ha-606900" (driver="hyperv")
	I0318 11:16:13.704521    6988 client.go:168] LocalClient.Create starting
	I0318 11:16:13.705500    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0318 11:16:13.705784    6988 main.go:141] libmachine: Decoding PEM data...
	I0318 11:16:13.705899    6988 main.go:141] libmachine: Parsing certificate...
	I0318 11:16:13.705958    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0318 11:16:13.706150    6988 main.go:141] libmachine: Decoding PEM data...
	I0318 11:16:13.706150    6988 main.go:141] libmachine: Parsing certificate...
	I0318 11:16:13.706150    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 11:16:15.734669    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 11:16:15.735241    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:15.735241    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 11:16:17.599830    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 11:16:17.599933    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:17.599933    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:16:19.141929    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:16:19.141929    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:19.142081    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:16:22.814700    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:16:22.815207    6988 main.go:141] libmachine: [stderr =====>] : 
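
Each `[executing ==>]` line above shells out to powershell.exe with -NoProfile -NonInteractive and captures stdout/stderr; the switch query wraps Get-VMSwitch in ConvertTo-Json so the result can be decoded structurally. A minimal sketch of that pattern, with the struct fields matching the JSON shown above and the wrapper itself illustrative:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int
    }

    func main() {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
    		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`).Output()
    	if err != nil {
    		panic(err)
    	}
    	var switches []vmSwitch
    	if err := json.Unmarshal(out, &switches); err != nil {
    		panic(err)
    	}
    	for _, s := range switches {
    		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
    	}
    }
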
	I0318 11:16:22.817521    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 11:16:23.292263    6988 main.go:141] libmachine: Creating SSH key...
	I0318 11:16:23.676906    6988 main.go:141] libmachine: Creating VM...
	I0318 11:16:23.676906    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:16:26.676845    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:16:26.676845    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:26.677103    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0318 11:16:26.677103    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:16:28.582824    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:16:28.582904    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:28.582904    6988 main.go:141] libmachine: Creating VHD
	I0318 11:16:28.582983    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 11:16:32.465012    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 94A352C9-CBC6-4F8B-B8FF-75EA329F7583
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 11:16:32.465310    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:32.465310    6988 main.go:141] libmachine: Writing magic tar header
	I0318 11:16:32.465415    6988 main.go:141] libmachine: Writing SSH key tar header
	I0318 11:16:32.474675    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 11:16:35.736119    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:35.736723    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:35.736949    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\disk.vhd' -SizeBytes 20000MB
	I0318 11:16:38.352859    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:38.352932    6988 main.go:141] libmachine: [stderr =====>] : 
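
The sequence above suggests the boot2docker userdata trick: a fixed 10 MB VHD is created, a tar stream carrying the SSH key is written over its data area ("Writing magic tar header" / "Writing SSH key tar header"), and the file is then converted to a dynamic VHD and resized to the full 20000 MB so the guest can extract the key on first boot. A heavily hedged sketch of just the tar-writing step; paths and the archive entry name are assumptions, and the real driver also handles the VHD footer:

    package main

    import (
    	"archive/tar"
    	"os"
    )

    func main() {
    	key, err := os.ReadFile(`id_rsa.pub`) // assumed key path
    	if err != nil {
    		panic(err)
    	}
    	// Open without truncation so the fixed VHD's trailing footer survives.
    	f, err := os.OpenFile(`fixed.vhd`, os.O_WRONLY, 0)
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	tw := tar.NewWriter(f)
    	// Entry name is illustrative; the guest-side convention may differ.
    	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(key))}
    	if err := tw.WriteHeader(hdr); err != nil {
    		panic(err)
    	}
    	if _, err := tw.Write(key); err != nil {
    		panic(err)
    	}
    	if err := tw.Close(); err != nil {
    		panic(err)
    	}
    }
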
	I0318 11:16:38.353004    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-606900-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 11:16:42.115273    6988 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-606900-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 11:16:42.115714    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:42.115714    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-606900-m02 -DynamicMemoryEnabled $false
	I0318 11:16:44.482472    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:44.482687    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:44.482687    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-606900-m02 -Count 2
	I0318 11:16:46.739235    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:46.739235    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:46.739377    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-606900-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\boot2docker.iso'
	I0318 11:16:49.360076    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:49.360076    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:49.360076    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-606900-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\disk.vhd'
	I0318 11:16:52.091800    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:52.091800    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:52.091800    6988 main.go:141] libmachine: Starting VM...
	I0318 11:16:52.092051    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-606900-m02
	I0318 11:16:55.266231    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:55.266231    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:55.266327    6988 main.go:141] libmachine: Waiting for host to start...
	I0318 11:16:55.266327    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:16:57.615984    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:16:57.615984    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:57.616059    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:00.236468    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:17:00.236468    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:01.249176    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:03.508737    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:03.508764    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:03.508831    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:06.157366    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:17:06.157366    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:07.163178    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:09.431219    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:09.431219    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:09.431219    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:12.040578    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:17:12.041680    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:13.057699    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:15.368804    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:15.369739    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:15.369739    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:18.025466    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:17:18.025466    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:19.029802    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:21.284031    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:21.284031    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:21.284728    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:23.944534    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:23.945551    6988 main.go:141] libmachine: [stderr =====>] : 
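
The "Waiting for host to start..." phase above alternates a VM-state check with a query for the first NIC's first IP address; an empty stdout means DHCP has not assigned one yet, so the loop sleeps roughly a second and retries (here 172.25.148.106 appears after about half a minute). A minimal sketch of that wait loop, with the VM name from the log and the timeout assumed:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	query := `(( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]`
    	deadline := time.Now().Add(5 * time.Minute) // assumed timeout
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
    		ip := strings.TrimSpace(string(out))
    		if err == nil && ip != "" {
    			fmt.Println("got IP:", ip) // e.g. 172.25.148.106
    			return
    		}
    		time.Sleep(time.Second)
    	}
    	panic("timed out waiting for a DHCP lease")
    }
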
	I0318 11:17:23.945644    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:26.155981    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:26.155981    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:26.156320    6988 machine.go:94] provisionDockerMachine start ...
	I0318 11:17:26.156457    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:28.429950    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:28.429950    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:28.429950    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:31.039834    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:31.039834    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:31.046290    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:17:31.047039    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:17:31.047039    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:17:31.171897    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 11:17:31.171897    6988 buildroot.go:166] provisioning hostname "ha-606900-m02"
	I0318 11:17:31.171897    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:33.382698    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:33.382698    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:33.382698    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:36.023191    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:36.023191    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:36.030310    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:17:36.030440    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:17:36.030440    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-606900-m02 && echo "ha-606900-m02" | sudo tee /etc/hostname
	I0318 11:17:36.194759    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-606900-m02
	
	I0318 11:17:36.194759    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:38.410375    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:38.410375    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:38.410375    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:41.045337    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:41.045337    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:41.052388    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:17:41.052388    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:17:41.052932    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-606900-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-606900-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-606900-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:17:41.195318    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 11:17:41.195379    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0318 11:17:41.195479    6988 buildroot.go:174] setting up certificates
	I0318 11:17:41.195537    6988 provision.go:84] configureAuth start
	I0318 11:17:41.195586    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:43.421138    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:43.421138    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:43.421484    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:46.097212    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:46.097212    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:46.098241    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:48.346410    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:48.346410    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:48.347146    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:50.989911    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:50.990537    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:50.990537    6988 provision.go:143] copyHostCerts
	I0318 11:17:50.990693    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0318 11:17:50.990923    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0318 11:17:50.990923    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0318 11:17:50.991375    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:17:50.992559    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0318 11:17:50.992870    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0318 11:17:50.992870    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0318 11:17:50.993181    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:17:50.994276    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0318 11:17:50.994498    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0318 11:17:50.994498    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0318 11:17:50.994917    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:17:50.995895    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-606900-m02 san=[127.0.0.1 172.25.148.106 ha-606900-m02 localhost minikube]
	I0318 11:17:51.120589    6988 provision.go:177] copyRemoteCerts
	I0318 11:17:51.135575    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:17:51.135575    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:53.324184    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:53.324184    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:53.324753    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:55.912764    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:55.912764    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:55.913888    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\id_rsa Username:docker}
	I0318 11:17:56.021152    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8855469s)
	I0318 11:17:56.021152    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 11:17:56.022139    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:17:56.076377    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 11:17:56.077382    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 11:17:56.126875    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 11:17:56.127326    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 11:17:56.179437    6988 provision.go:87] duration metric: took 14.9838054s to configureAuth
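The `generating server cert` step above issues a Docker TLS server certificate whose SANs must cover every name and address the daemon will be reached at (127.0.0.1, the VM IP, the machine name, localhost, minikube). Below is a minimal Go sketch of building such a SAN list with crypto/x509; it self-signs for brevity, whereas the real provisioner signs with the ca.pem/ca-key.pem pair listed in the auth options, so treat it as illustrative only.

    // Sketch: issue a server cert with the SAN list logged above.
    // Self-signed for brevity; the real flow signs with ca.pem/ca-key.pem.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-606900-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// san=[127.0.0.1 172.25.148.106 ha-606900-m02 localhost minikube]
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.25.148.106")},
    		DNSNames:    []string{"ha-606900-m02", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }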
	I0318 11:17:56.179475    6988 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:17:56.179973    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:17:56.180132    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:58.338047    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:58.339028    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:58.339028    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:00.950913    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:00.951625    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:00.956862    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:18:00.957182    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:18:00.957182    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:18:01.082835    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:18:01.082835    6988 buildroot.go:70] root file system type: tmpfs
	I0318 11:18:01.082835    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:18:01.082835    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:03.273278    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:03.274182    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:03.274182    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:05.926915    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:05.927213    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:05.932751    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:18:05.933426    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:18:05.933426    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.148.74"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:18:06.084370    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.148.74
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:18:06.084370    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:08.263097    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:08.263275    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:08.263361    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:10.867448    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:10.867448    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:10.874852    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:18:10.874852    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:18:10.874852    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:18:13.053268    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 11:18:13.053344    6988 machine.go:97] duration metric: took 46.896728s to provisionDockerMachine
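The SSH command above uses a write-only-if-changed idiom: `diff` the freshly rendered unit against the installed one, and only on a difference move it into place and daemon-reload/enable/restart Docker, so an unchanged unit costs nothing on later provisions. A Go sketch of the same idiom follows; the unit path and contents are illustrative, not minikube's code.

    // Sketch of the diff-then-replace idiom in the SSH command above.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func installIfChanged(path string, rendered []byte) error {
    	current, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(current, rendered) {
    		return nil // identical: skip daemon-reload and restart entirely
    	}
    	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(path+".new", path); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
    	if err := installIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }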
	I0318 11:18:13.053398    6988 client.go:171] duration metric: took 1m59.3475434s to LocalClient.Create
	I0318 11:18:13.053435    6988 start.go:167] duration metric: took 1m59.3481627s to libmachine.API.Create "ha-606900"
	I0318 11:18:13.053435    6988 start.go:293] postStartSetup for "ha-606900-m02" (driver="hyperv")
	I0318 11:18:13.053506    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:18:13.066051    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:18:13.067095    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:15.266461    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:15.266461    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:15.266461    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:17.923989    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:17.923989    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:17.924644    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\id_rsa Username:docker}
	I0318 11:18:18.028046    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9619632s)
	I0318 11:18:18.043354    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:18:18.050399    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:18:18.050458    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0318 11:18:18.050620    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0318 11:18:18.051737    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> 91202.pem in /etc/ssl/certs
	I0318 11:18:18.051737    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /etc/ssl/certs/91202.pem
	I0318 11:18:18.065814    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 11:18:18.084375    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /etc/ssl/certs/91202.pem (1708 bytes)
	I0318 11:18:18.134545    6988 start.go:296] duration metric: took 5.0810774s for postStartSetup
	I0318 11:18:18.138696    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:20.369387    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:20.369586    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:20.369586    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:22.964920    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:22.964920    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:22.965825    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:18:22.969150    6988 start.go:128] duration metric: took 2m9.2720381s to createHost
	I0318 11:18:22.969472    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:25.170668    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:25.170668    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:25.170668    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:27.838464    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:27.838464    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:27.843839    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:18:27.844381    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:18:27.844586    6988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 11:18:27.974384    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710760707.958099644
	
	I0318 11:18:27.974384    6988 fix.go:216] guest clock: 1710760707.958099644
	I0318 11:18:27.974384    6988 fix.go:229] Guest: 2024-03-18 11:18:27.958099644 +0000 UTC Remote: 2024-03-18 11:18:22.9691501 +0000 UTC m=+347.321305401 (delta=4.988949544s)
	I0318 11:18:27.974384    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:30.179075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:30.179652    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:30.179652    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:32.785204    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:32.785204    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:32.791008    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:18:32.791684    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:18:32.791684    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710760707
	I0318 11:18:32.926747    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:18:27 UTC 2024
	
	I0318 11:18:32.926818    6988 fix.go:236] clock set: Mon Mar 18 11:18:27 UTC 2024
	 (err=<nil>)
	I0318 11:18:32.926818    6988 start.go:83] releasing machines lock for "ha-606900-m02", held for 2m19.2301296s
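The clock fix above works by running `date +%s.%N` in the guest, comparing the result with the host's clock, and resetting the guest with `sudo date -s @<seconds>` when the drift is too large (the ~5s delta logged here tripped it). A rough Go sketch of that comparison, using the value from the log; the 2s cutoff is an assumption for illustration.

    // Sketch: parse the guest's `date +%s.%N` output and decide whether to
    // reset its clock over SSH.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	guestOut := "1710760707.958099644" // value from the log above
    	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
    	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
    	guest := time.Unix(sec, nsec)
    	delta := guest.Sub(time.Now()) // the log reports delta = guest - host
    	fmt.Printf("guest=%s delta=%s\n", guest.UTC().Format(time.RFC3339Nano), delta)
    	if delta > 2*time.Second || delta < -2*time.Second {
    		// mirrors the `sudo date -s @1710760707` command in the log
    		fmt.Printf("would run over SSH: sudo date -s @%d\n", sec)
    	}
    }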
	I0318 11:18:32.927115    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:35.100434    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:35.101267    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:35.101267    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:37.763215    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:37.763215    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:37.766012    6988 out.go:177] * Found network options:
	I0318 11:18:37.768781    6988 out.go:177]   - NO_PROXY=172.25.148.74
	W0318 11:18:37.771834    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:18:37.775295    6988 out.go:177]   - NO_PROXY=172.25.148.74
	W0318 11:18:37.778570    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:18:37.779944    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:18:37.783608    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:18:37.783739    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:37.795320    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 11:18:37.795320    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:40.069228    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:40.069228    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:40.069335    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:40.078448    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:40.078448    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:40.078448    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:42.807693    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:42.807882    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:42.808463    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\id_rsa Username:docker}
	I0318 11:18:42.832137    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:42.832776    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:42.833185    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\id_rsa Username:docker}
	I0318 11:18:42.995256    6988 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1999034s)
	I0318 11:18:42.995256    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2116157s)
	W0318 11:18:42.995256    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:18:43.008592    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:18:43.039208    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 11:18:43.039208    6988 start.go:494] detecting cgroup driver to use...
	I0318 11:18:43.039208    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:18:43.088819    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:18:43.125725    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:18:43.145356    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:18:43.157802    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:18:43.195262    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:18:43.226901    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:18:43.265266    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:18:43.296209    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:18:43.327219    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 11:18:43.359989    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:18:43.392395    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:18:43.422385    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:18:43.640591    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 11:18:43.676274    6988 start.go:494] detecting cgroup driver to use...
	I0318 11:18:43.686572    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:18:43.726055    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:18:43.762601    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:18:43.812441    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:18:43.849220    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:18:43.888333    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 11:18:43.955295    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:18:43.980270    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:18:44.032307    6988 ssh_runner.go:195] Run: which cri-dockerd
	I0318 11:18:44.049882    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:18:44.071945    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:18:44.121312    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:18:44.339527    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:18:44.543976    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:18:44.544044    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 11:18:44.592143    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:18:44.804315    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:18:47.357918    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5525709s)
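Before this restart, the log notes Docker being configured for the "cgroupfs" cgroup driver via a 130-byte /etc/docker/daemon.json. The payload itself is not echoed, so the sketch below is an assumption showing only the conventional knob for pinning the driver, not the file minikube actually wrote.

    // Sketch only: the daemon.json contents are not echoed in the log, so this
    // field is an assumption about how the cgroupfs driver is typically pinned.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	out, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(out)) // destined for /etc/docker/daemon.json
    }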
	I0318 11:18:47.370766    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:18:47.411486    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:18:47.449152    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:18:47.669084    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:18:47.876260    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:18:48.095103    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:18:48.135862    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:18:48.173199    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:18:48.378858    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:18:48.487730    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:18:48.501259    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 11:18:48.512010    6988 start.go:562] Will wait 60s for crictl version
	I0318 11:18:48.523923    6988 ssh_runner.go:195] Run: which crictl
	I0318 11:18:48.542717    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:18:48.618593    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 11:18:48.628679    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:18:48.676020    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:18:48.716171    6988 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:18:48.719341    6988 out.go:177]   - env NO_PROXY=172.25.148.74
	I0318 11:18:48.726055    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:18:48.730688    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:18:48.730688    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:18:48.730688    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:18:48.730688    6988 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ae:0d:2c Flags:up|broadcast|multicast|running}
	I0318 11:18:48.733137    6988 ip.go:210] interface addr: fe80::f8a6:d6b6:cc4:1ba0/64
	I0318 11:18:48.733137    6988 ip.go:210] interface addr: 172.25.144.1/20
	I0318 11:18:48.746157    6988 ssh_runner.go:195] Run: grep 172.25.144.1	host.minikube.internal$ /etc/hosts
	I0318 11:18:48.753711    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
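The /etc/hosts rewrite above is idempotent: strip any previous host.minikube.internal entry, append the current gateway IP, and copy the result back with sudo. The same logic as a Go sketch; it prints the new file instead of writing it.

    // Sketch of the grep -v / echo / cp pipeline above: rebuild /etc/hosts
    // without any stale host.minikube.internal line, then append the new one.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, "172.25.144.1\thost.minikube.internal")
    	fmt.Println(strings.Join(kept, "\n")) // real flow writes this back via sudo cp
    }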
	I0318 11:18:48.776669    6988 mustload.go:65] Loading cluster: ha-606900
	I0318 11:18:48.777817    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:18:48.778528    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:18:50.955932    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:50.956379    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:50.956420    6988 host.go:66] Checking if "ha-606900" exists ...
	I0318 11:18:50.956675    6988 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900 for IP: 172.25.148.106
	I0318 11:18:50.956675    6988 certs.go:194] generating shared ca certs ...
	I0318 11:18:50.956675    6988 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:18:50.957756    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0318 11:18:50.958266    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:18:50.958376    6988 certs.go:256] generating profile certs ...
	I0318 11:18:50.959398    6988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.key
	I0318 11:18:50.959733    6988 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.2a060aa0
	I0318 11:18:50.960133    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.2a060aa0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.148.74 172.25.148.106 172.25.159.254]
	I0318 11:18:51.197801    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.2a060aa0 ...
	I0318 11:18:51.197801    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.2a060aa0: {Name:mk9d7ae9ba5a8c0b27ce142ce3b747943c789334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:18:51.199611    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.2a060aa0 ...
	I0318 11:18:51.199611    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.2a060aa0: {Name:mk24deb8d747f6290c3ccba9faa6f7a8d3fb3ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:18:51.200552    6988 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.2a060aa0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt
	I0318 11:18:51.212892    6988 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.2a060aa0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key
	I0318 11:18:51.213842    6988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key
	I0318 11:18:51.214833    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 11:18:51.214974    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 11:18:51.215200    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 11:18:51.215399    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 11:18:51.215605    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 11:18:51.215769    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 11:18:51.215949    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 11:18:51.216102    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 11:18:51.216279    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem (1338 bytes)
	W0318 11:18:51.216279    6988 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120_empty.pem, impossibly tiny 0 bytes
	I0318 11:18:51.216891    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0318 11:18:51.217144    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:18:51.217368    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:18:51.217368    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:18:51.217368    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem (1708 bytes)
	I0318 11:18:51.217368    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem -> /usr/share/ca-certificates/9120.pem
	I0318 11:18:51.217368    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /usr/share/ca-certificates/91202.pem
	I0318 11:18:51.217368    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:18:51.217368    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:18:53.387606    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:53.387606    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:53.388037    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:56.015490    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:18:56.016542    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:56.016760    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:18:56.114235    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0318 11:18:56.121995    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 11:18:56.155759    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0318 11:18:56.163678    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0318 11:18:56.200671    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 11:18:56.208895    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 11:18:56.244780    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0318 11:18:56.252608    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0318 11:18:56.286707    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0318 11:18:56.293568    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 11:18:56.342878    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0318 11:18:56.350781    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0318 11:18:56.371657    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:18:56.427795    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 11:18:56.486459    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:18:56.534836    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:18:56.585359    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 11:18:56.630393    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 11:18:56.682727    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:18:56.731758    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 11:18:56.779679    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem --> /usr/share/ca-certificates/9120.pem (1338 bytes)
	I0318 11:18:56.831849    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /usr/share/ca-certificates/91202.pem (1708 bytes)
	I0318 11:18:56.882835    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:18:56.933127    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 11:18:56.966877    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0318 11:18:57.003068    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 11:18:57.038971    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0318 11:18:57.074346    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 11:18:57.110027    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0318 11:18:57.144910    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 11:18:57.203717    6988 ssh_runner.go:195] Run: openssl version
	I0318 11:18:57.228299    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:18:57.260581    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:18:57.268119    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:18:57.280738    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:18:57.303370    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 11:18:57.337414    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9120.pem && ln -fs /usr/share/ca-certificates/9120.pem /etc/ssl/certs/9120.pem"
	I0318 11:18:57.372167    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9120.pem
	I0318 11:18:57.379760    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 11:18:57.393047    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9120.pem
	I0318 11:18:57.415692    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9120.pem /etc/ssl/certs/51391683.0"
	I0318 11:18:57.454750    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91202.pem && ln -fs /usr/share/ca-certificates/91202.pem /etc/ssl/certs/91202.pem"
	I0318 11:18:57.488390    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91202.pem
	I0318 11:18:57.496913    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 11:18:57.519801    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91202.pem
	I0318 11:18:57.543042    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91202.pem /etc/ssl/certs/3ec20f2e.0"
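The `openssl x509 -hash -noout -in` calls above compute the subject hash that OpenSSL resolves as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA.pem), which is why each cert installed under /usr/share/ca-certificates also gets a hash-named symlink. A Go sketch of that step, shelling out to the same openssl command; paths are illustrative and the symlink needs root.

    // Sketch: create the <subject-hash>.0 symlink that the openssl calls
    // above are computing the name for.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	if _, err := os.Lstat(link); os.IsNotExist(err) {
    		if err := os.Symlink(cert, link); err != nil {
    			panic(err)
    		}
    	}
    	fmt.Println("OpenSSL will now find", cert, "via", link)
    }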
	I0318 11:18:57.584527    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:18:57.591610    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 11:18:57.591974    6988 kubeadm.go:928] updating node {m02 172.25.148.106 8443 v1.28.4 docker true true} ...
	I0318 11:18:57.591974    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-606900-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.148.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 11:18:57.591974    6988 kube-vip.go:111] generating kube-vip config ...
	I0318 11:18:57.604686    6988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 11:18:57.634413    6988 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 11:18:57.634413    6988 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
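The generated kube-vip static-pod manifest drives everything through container env vars: the VIP itself (address=172.25.159.254), the interface to claim it on, and the leader-election lease timings. A quick way to sanity-check such a manifest is to decode it and read those fields back, as in the Go sketch below; it assumes gopkg.in/yaml.v3 is available via `go get`, and the struct covers only the fields inspected.

    // Sketch: decode the kube-vip manifest and read back the VIP settings.
    package main

    import (
    	"fmt"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    type pod struct {
    	Spec struct {
    		Containers []struct {
    			Env []struct {
    				Name  string `yaml:"name"`
    				Value string `yaml:"value"`
    			} `yaml:"env"`
    		} `yaml:"containers"`
    	} `yaml:"spec"`
    }

    func main() {
    	raw, err := os.ReadFile("kube-vip.yaml")
    	if err != nil {
    		panic(err)
    	}
    	var p pod
    	if err := yaml.Unmarshal(raw, &p); err != nil {
    		panic(err)
    	}
    	for _, e := range p.Spec.Containers[0].Env {
    		switch e.Name {
    		case "address", "vip_interface", "vip_leaseduration":
    			fmt.Printf("%s=%s\n", e.Name, e.Value)
    		}
    	}
    }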
	I0318 11:18:57.646376    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:18:57.671715    6988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 11:18:57.683394    6988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 11:18:57.706582    6988 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
	I0318 11:18:57.706644    6988 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0318 11:18:57.706732    6988 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I0318 11:18:58.653946    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:18:58.665920    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:18:58.673924    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 11:18:58.673924    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 11:19:01.594637    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:19:01.611573    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:19:01.624199    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 11:19:01.624628    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 11:19:05.617433    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:19:05.648435    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:19:05.659765    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:19:05.668035    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 11:19:05.668265    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
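Each binary transfer above follows the same pattern: run a remote stat existence check, and scp the cached binary only when the check fails (as it does here on a fresh node) or disagrees with the local copy. A Go sketch of that decision; runRemote is a hypothetical stand-in for minikube's ssh_runner.

    // Sketch of the stat-then-scp decision above. runRemote always fails
    // here, as it does on a fresh node with an empty binaries directory.
    package main

    import (
    	"fmt"
    	"os"
    )

    func runRemote(cmd string) (string, error) {
    	return "", fmt.Errorf("Process exited with status 1")
    }

    func needsCopy(local, remote string) bool {
    	st, err := os.Stat(local)
    	if err != nil {
    		return false // no cached binary to push
    	}
    	got, err := runRemote(fmt.Sprintf("stat -c %%s %s", remote))
    	// a failed stat (file missing) or a size mismatch both force a copy
    	return err != nil || got != fmt.Sprintf("%d", st.Size())
    }

    func main() {
    	fmt.Println(needsCopy("kubectl", "/var/lib/minikube/binaries/v1.28.4/kubectl"))
    }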
	I0318 11:19:06.362735    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 11:19:06.382756    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0318 11:19:06.425562    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:19:06.464896    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 11:19:06.514097    6988 ssh_runner.go:195] Run: grep 172.25.159.254	control-plane.minikube.internal$ /etc/hosts
	I0318 11:19:06.519890    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 11:19:06.560914    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:19:06.795370    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:19:06.833813    6988 host.go:66] Checking if "ha-606900" exists ...
	I0318 11:19:06.834439    6988 start.go:316] joinCluster: &{Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.148.106 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:19:06.834439    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 11:19:06.834439    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:19:09.001061    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:19:09.001731    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:19:09.001994    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:19:11.676669    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:19:11.676669    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:19:11.677265    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:19:12.030333    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1958608s)
	I0318 11:19:12.030522    6988 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.25.148.106 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:19:12.030576    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0x9lic.e93j8sfuv95zwn47 --discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-606900-m02 --control-plane --apiserver-advertise-address=172.25.148.106 --apiserver-bind-port=8443"
	I0318 11:20:13.769317    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0x9lic.e93j8sfuv95zwn47 --discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-606900-m02 --control-plane --apiserver-advertise-address=172.25.148.106 --apiserver-bind-port=8443": (1m1.7382852s)
	I0318 11:20:13.769453    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 11:20:14.478554    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-606900-m02 minikube.k8s.io/updated_at=2024_03_18T11_20_14_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd minikube.k8s.io/name=ha-606900 minikube.k8s.io/primary=false
	I0318 11:20:14.680569    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-606900-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 11:20:14.856695    6988 start.go:318] duration metric: took 1m8.0218267s to joinCluster
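
Once kubeadm join returns, kubelet is daemon-reloaded, enabled, and started, the node is stamped with minikube bookkeeping labels, and the control-plane NoSchedule taint is removed (the trailing "-" in the taint argument deletes it) so the node can also run regular workloads. A hedged sketch of those two kubectl calls from Go; the node name comes from the log, the rest is illustrative:

    // Sketch: label the joined node and delete its control-plane taint,
    // as in the two kubectl runs above.
    package main

    import "os/exec"

    func main() {
    	node := "ha-606900-m02"
    	// --overwrite makes the label call idempotent across retries.
    	label := exec.Command("kubectl", "label", "--overwrite", "nodes", node,
    		"minikube.k8s.io/name=ha-606900", "minikube.k8s.io/primary=false")
    	// Trailing "-" removes the taint, allowing regular pods to schedule here.
    	taint := exec.Command("kubectl", "taint", "nodes", node,
    		"node-role.kubernetes.io/control-plane:NoSchedule-")
    	for _, c := range []*exec.Cmd{label, taint} {
    		if out, err := c.CombinedOutput(); err != nil {
    			panic(string(out))
    		}
    	}
    }
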
	I0318 11:20:14.857680    6988 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.25.148.106 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:20:14.859688    6988 out.go:177] * Verifying Kubernetes components...
	I0318 11:20:14.858668    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:20:14.876681    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:20:15.349006    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:20:15.377688    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 11:20:15.378453    6988 kapi.go:59] client config for ha-606900: &rest.Config{Host:"https://172.25.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-606900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-606900\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 11:20:15.378640    6988 kubeadm.go:477] Overriding stale ClientConfig host https://172.25.159.254:8443 with https://172.25.148.74:8443
	I0318 11:20:15.379440    6988 node_ready.go:35] waiting up to 6m0s for node "ha-606900-m02" to be "Ready" ...
	I0318 11:20:15.379678    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:15.379762    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:15.379788    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:15.379788    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:15.395437    6988 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0318 11:20:15.882575    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:15.882575    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:15.882575    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:15.882575    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:15.889596    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:20:16.387165    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:16.387232    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:16.387232    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:16.387293    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:16.394827    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:20:16.893984    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:16.893984    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:16.893984    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:16.893984    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:16.898982    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:17.387108    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:17.387108    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:17.387108    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:17.387108    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:17.392218    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:17.392923    6988 node_ready.go:53] node "ha-606900-m02" has status "Ready":"False"
	I0318 11:20:17.895715    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:17.895786    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:17.895786    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:17.895786    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:17.902216    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:20:18.388771    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:18.388771    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:18.388771    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:18.388771    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:18.395072    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:20:18.880438    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:18.880438    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:18.880438    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:18.880438    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:18.888017    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:20:19.390780    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:19.391146    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:19.391146    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:19.391146    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:19.416551    6988 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0318 11:20:19.417623    6988 node_ready.go:53] node "ha-606900-m02" has status "Ready":"False"
	I0318 11:20:19.881328    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:19.881328    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:19.881328    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:19.881328    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:19.887357    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:20:20.391926    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:20.391926    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:20.391926    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:20.391926    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:20.397921    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:20.884821    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:20.884821    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:20.885109    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:20.885109    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:20.893814    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:20:21.391402    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:21.391402    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:21.391402    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:21.391402    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:21.407036    6988 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0318 11:20:21.880809    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:21.880809    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:21.880809    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:21.880809    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:21.886700    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:21.887444    6988 node_ready.go:53] node "ha-606900-m02" has status "Ready":"False"
	I0318 11:20:22.385448    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:22.385448    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:22.385766    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:22.385766    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:22.390544    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:22.894371    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:22.894371    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:22.894456    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:22.894456    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:22.899898    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:23.386476    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:23.386614    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.386614    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.386643    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.391253    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:23.889873    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:23.889873    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.889873    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.889873    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.895254    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:23.896284    6988 node_ready.go:49] node "ha-606900-m02" has status "Ready":"True"
	I0318 11:20:23.896358    6988 node_ready.go:38] duration metric: took 8.5168075s for node "ha-606900-m02" to be "Ready" ...
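
The block above is a roughly 500ms polling loop on GET /api/v1/nodes/ha-606900-m02 until the node reports the Ready condition. The real loop is hand-rolled around raw requests; the equivalent idea with client-go, under an assumed kubeconfig path, looks roughly like this:

    // Sketch: poll a node's Ready condition at the cadence seen in the log.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Poll every 500ms for up to 6 minutes, matching the wait budget above.
    	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-606900-m02", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat errors as transient and keep polling
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    	fmt.Println("ready:", err == nil)
    }
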
	I0318 11:20:23.896358    6988 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:20:23.896498    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:20:23.896498    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.896593    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.896736    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.904321    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:20:23.916702    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jsf9x" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.916702    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jsf9x
	I0318 11:20:23.916702    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.916702    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.916702    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.920846    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:23.922462    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:23.922462    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.922518    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.922518    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.926562    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:23.928001    6988 pod_ready.go:92] pod "coredns-5dd5756b68-jsf9x" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:23.928061    6988 pod_ready.go:81] duration metric: took 11.359ms for pod "coredns-5dd5756b68-jsf9x" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.928061    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wvh6v" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.928259    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wvh6v
	I0318 11:20:23.928321    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.928321    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.928321    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.932012    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:20:23.933505    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:23.933505    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.933505    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.933505    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.937785    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:23.938321    6988 pod_ready.go:92] pod "coredns-5dd5756b68-wvh6v" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:23.938321    6988 pod_ready.go:81] duration metric: took 10.2599ms for pod "coredns-5dd5756b68-wvh6v" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.938321    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.938321    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900
	I0318 11:20:23.938321    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.938321    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.938321    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.943028    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:23.944363    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:23.944485    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.944485    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.944485    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.947776    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:20:23.949742    6988 pod_ready.go:92] pod "etcd-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:23.949742    6988 pod_ready.go:81] duration metric: took 11.4211ms for pod "etcd-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.949742    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.949925    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m02
	I0318 11:20:23.950000    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.950000    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.950000    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.953933    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:20:23.954614    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:23.954675    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.954675    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.954675    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.958273    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:20:23.959704    6988 pod_ready.go:92] pod "etcd-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:23.959704    6988 pod_ready.go:81] duration metric: took 9.9625ms for pod "etcd-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.959704    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:24.091948    6988 request.go:629] Waited for 132.1648ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900
	I0318 11:20:24.092368    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900
	I0318 11:20:24.092368    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:24.092368    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:24.092368    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:24.097403    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:24.295753    6988 request.go:629] Waited for 196.7343ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:24.295962    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:24.296045    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:24.296045    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:24.296085    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:24.302389    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:20:24.304040    6988 pod_ready.go:92] pod "kube-apiserver-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:24.304040    6988 pod_ready.go:81] duration metric: took 344.3336ms for pod "kube-apiserver-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:24.304040    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:24.498774    6988 request.go:629] Waited for 194.7324ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900-m02
	I0318 11:20:24.498942    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900-m02
	I0318 11:20:24.498942    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:24.498942    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:24.499054    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:24.505373    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:20:24.702752    6988 request.go:629] Waited for 196.6114ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:24.703037    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:24.703037    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:24.703037    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:24.703135    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:24.708858    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:24.709669    6988 pod_ready.go:92] pod "kube-apiserver-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:24.709669    6988 pod_ready.go:81] duration metric: took 405.626ms for pod "kube-apiserver-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:24.709669    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:24.890744    6988 request.go:629] Waited for 180.9519ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900
	I0318 11:20:24.891017    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900
	I0318 11:20:24.891167    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:24.891207    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:24.891207    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:24.897057    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:25.095748    6988 request.go:629] Waited for 197.1813ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:25.095968    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:25.095968    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:25.095968    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:25.096032    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:25.101033    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:25.101939    6988 pod_ready.go:92] pod "kube-controller-manager-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:25.101939    6988 pod_ready.go:81] duration metric: took 392.2683ms for pod "kube-controller-manager-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:25.101939    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:25.298400    6988 request.go:629] Waited for 196.4596ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900-m02
	I0318 11:20:25.298992    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900-m02
	I0318 11:20:25.298992    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:25.298992    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:25.298992    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:25.309741    6988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:20:25.500693    6988 request.go:629] Waited for 189.227ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:25.501327    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:25.501373    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:25.501373    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:25.501373    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:25.508972    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:20:25.509708    6988 pod_ready.go:92] pod "kube-controller-manager-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:25.509708    6988 pod_ready.go:81] duration metric: took 407.766ms for pod "kube-controller-manager-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:25.509708    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fk4wg" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:25.704197    6988 request.go:629] Waited for 194.2953ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fk4wg
	I0318 11:20:25.704457    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fk4wg
	I0318 11:20:25.704534    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:25.704534    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:25.704534    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:25.709908    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:25.890618    6988 request.go:629] Waited for 178.9881ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:25.890787    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:25.890787    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:25.890787    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:25.890850    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:25.898251    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:20:25.898875    6988 pod_ready.go:92] pod "kube-proxy-fk4wg" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:25.898875    6988 pod_ready.go:81] duration metric: took 389.1645ms for pod "kube-proxy-fk4wg" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:25.898875    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s9lzf" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:26.094968    6988 request.go:629] Waited for 196.0917ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9lzf
	I0318 11:20:26.095254    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9lzf
	I0318 11:20:26.095329    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:26.095363    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:26.095363    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:26.100953    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:26.296438    6988 request.go:629] Waited for 194.0859ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:26.296929    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:26.296929    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:26.296929    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:26.296929    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:26.301985    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:26.303830    6988 pod_ready.go:92] pod "kube-proxy-s9lzf" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:26.303830    6988 pod_ready.go:81] duration metric: took 404.9527ms for pod "kube-proxy-s9lzf" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:26.303830    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:26.503015    6988 request.go:629] Waited for 199.0789ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900
	I0318 11:20:26.503503    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900
	I0318 11:20:26.503503    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:26.503503    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:26.503503    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:26.510540    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:20:26.692698    6988 request.go:629] Waited for 181.092ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:26.693179    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:26.693240    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:26.693240    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:26.693240    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:26.715483    6988 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0318 11:20:26.717373    6988 pod_ready.go:92] pod "kube-scheduler-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:26.717432    6988 pod_ready.go:81] duration metric: took 413.5988ms for pod "kube-scheduler-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:26.717432    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:26.897834    6988 request.go:629] Waited for 180.0383ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900-m02
	I0318 11:20:26.898028    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900-m02
	I0318 11:20:26.898028    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:26.898028    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:26.898174    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:26.906801    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:20:27.101664    6988 request.go:629] Waited for 193.6185ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:27.101838    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:27.101838    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:27.101838    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:27.101838    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:27.110484    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:20:27.112089    6988 pod_ready.go:92] pod "kube-scheduler-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:27.112089    6988 pod_ready.go:81] duration metric: took 394.591ms for pod "kube-scheduler-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:27.112089    6988 pod_ready.go:38] duration metric: took 3.2157104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
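
Each pod wait above is the same pattern: fetch the pod, test its Ready condition, then confirm the node it runs on. The interleaved "request.go:629 Waited ... due to client-side throttling" lines come from client-go's default rate limiter, which applies its defaults (QPS 5, burst 10) whenever rest.Config leaves QPS and Burst at 0, as in the client config logged earlier. A small sketch of the per-pod check:

    // Sketch: the per-pod "Ready" condition test behind the pod_ready waits.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
    		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
    	}}}
    	fmt.Println(podReady(p)) // true
    }
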
	I0318 11:20:27.112089    6988 api_server.go:52] waiting for apiserver process to appear ...
	I0318 11:20:27.126495    6988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:20:27.160341    6988 api_server.go:72] duration metric: took 12.3021066s to wait for apiserver process to appear ...
	I0318 11:20:27.160412    6988 api_server.go:88] waiting for apiserver healthz status ...
	I0318 11:20:27.160412    6988 api_server.go:253] Checking apiserver healthz at https://172.25.148.74:8443/healthz ...
	I0318 11:20:27.169273    6988 api_server.go:279] https://172.25.148.74:8443/healthz returned 200:
	ok
	I0318 11:20:27.169998    6988 round_trippers.go:463] GET https://172.25.148.74:8443/version
	I0318 11:20:27.170108    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:27.170108    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:27.170108    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:27.172208    6988 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:20:27.172208    6988 api_server.go:141] control plane version: v1.28.4
	I0318 11:20:27.172208    6988 api_server.go:131] duration metric: took 11.7955ms to wait for apiserver health ...
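
The health gate is just an HTTPS GET against /healthz expecting "200 ok", followed by a /version call to record the control-plane version. A self-contained sketch; TLS verification is skipped here only to keep the example short, whereas the real client trusts the cluster CA from ca.crt:

    // Sketch: probe the apiserver healthz endpoint seen above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // example only
    	}}
    	resp, err := client.Get("https://172.25.148.74:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
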
	I0318 11:20:27.172208    6988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 11:20:27.292506    6988 request.go:629] Waited for 119.8573ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:20:27.292594    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:20:27.292594    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:27.292594    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:27.292713    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:27.304042    6988 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 11:20:27.311738    6988 system_pods.go:59] 17 kube-system pods found
	I0318 11:20:27.311738    6988 system_pods.go:61] "coredns-5dd5756b68-jsf9x" [05681724-a32a-40c0-9f26-1c1eb9dffb65] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "coredns-5dd5756b68-wvh6v" [843ee0ec-fcfd-4763-8c92-acfe93bec900] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "etcd-ha-606900" [ed704c6d-aba3-496c-9988-c9f86218f1b4] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "etcd-ha-606900-m02" [a453b1e7-143c-4ea7-a1f4-f6dc6f8aa0b8] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kindnet-8977g" [97e55124-90c8-4cda-854c-ee1059fafdac] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kindnet-b68s4" [d2b7c03a-1303-4e1d-bf2b-2975716685d6] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kube-apiserver-ha-606900" [90f9b505-a404-4227-8a93-8d74ab235009] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kube-apiserver-ha-606900-m02" [b3373a21-b66f-42c9-a088-97e3a86cd9fd] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kube-controller-manager-ha-606900" [d3660558-d0d0-430f-baeb-912cef1a751f] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kube-controller-manager-ha-606900-m02" [93c8139a-db05-4492-a62d-13ecabdadab6] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kube-proxy-fk4wg" [3b8fe48c-5035-4e97-9a79-73907e53d2ef] Running
	I0318 11:20:27.312264    6988 system_pods.go:61] "kube-proxy-s9lzf" [c0ba2c37-0dea-43c1-b2d4-ce36b6f6e9ff] Running
	I0318 11:20:27.312264    6988 system_pods.go:61] "kube-scheduler-ha-606900" [6efc4fea-f6fe-4057-96b0-fd62ba3aba5d] Running
	I0318 11:20:27.312264    6988 system_pods.go:61] "kube-scheduler-ha-606900-m02" [f1646aeb-90ea-46f7-a0f9-28b3b68f341c] Running
	I0318 11:20:27.312302    6988 system_pods.go:61] "kube-vip-ha-606900" [540ec4bc-f9bc-4710-be1e-bb289e8cbea4] Running
	I0318 11:20:27.312302    6988 system_pods.go:61] "kube-vip-ha-606900-m02" [9063c185-9922-4ca7-82df-34db3af5f0be] Running
	I0318 11:20:27.312302    6988 system_pods.go:61] "storage-provisioner" [d03b3748-8b89-4a55-9e0e-871a5b79532f] Running
	I0318 11:20:27.312302    6988 system_pods.go:74] duration metric: took 140.0933ms to wait for pod list to return data ...
	I0318 11:20:27.312346    6988 default_sa.go:34] waiting for default service account to be created ...
	I0318 11:20:27.495595    6988 request.go:629] Waited for 183.2264ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:20:27.495907    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:20:27.495907    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:27.495907    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:27.495907    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:27.505986    6988 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 11:20:27.507397    6988 default_sa.go:45] found service account: "default"
	I0318 11:20:27.507457    6988 default_sa.go:55] duration metric: took 195.0884ms for default service account to be created ...
	I0318 11:20:27.507457    6988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 11:20:27.698288    6988 request.go:629] Waited for 190.4253ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:20:27.698510    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:20:27.698702    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:27.698715    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:27.698715    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:27.707042    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:20:27.713449    6988 system_pods.go:86] 17 kube-system pods found
	I0318 11:20:27.713449    6988 system_pods.go:89] "coredns-5dd5756b68-jsf9x" [05681724-a32a-40c0-9f26-1c1eb9dffb65] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "coredns-5dd5756b68-wvh6v" [843ee0ec-fcfd-4763-8c92-acfe93bec900] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "etcd-ha-606900" [ed704c6d-aba3-496c-9988-c9f86218f1b4] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "etcd-ha-606900-m02" [a453b1e7-143c-4ea7-a1f4-f6dc6f8aa0b8] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kindnet-8977g" [97e55124-90c8-4cda-854c-ee1059fafdac] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kindnet-b68s4" [d2b7c03a-1303-4e1d-bf2b-2975716685d6] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-apiserver-ha-606900" [90f9b505-a404-4227-8a93-8d74ab235009] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-apiserver-ha-606900-m02" [b3373a21-b66f-42c9-a088-97e3a86cd9fd] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-controller-manager-ha-606900" [d3660558-d0d0-430f-baeb-912cef1a751f] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-controller-manager-ha-606900-m02" [93c8139a-db05-4492-a62d-13ecabdadab6] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-proxy-fk4wg" [3b8fe48c-5035-4e97-9a79-73907e53d2ef] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-proxy-s9lzf" [c0ba2c37-0dea-43c1-b2d4-ce36b6f6e9ff] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-scheduler-ha-606900" [6efc4fea-f6fe-4057-96b0-fd62ba3aba5d] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-scheduler-ha-606900-m02" [f1646aeb-90ea-46f7-a0f9-28b3b68f341c] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-vip-ha-606900" [540ec4bc-f9bc-4710-be1e-bb289e8cbea4] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-vip-ha-606900-m02" [9063c185-9922-4ca7-82df-34db3af5f0be] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "storage-provisioner" [d03b3748-8b89-4a55-9e0e-871a5b79532f] Running
	I0318 11:20:27.713449    6988 system_pods.go:126] duration metric: took 205.9906ms to wait for k8s-apps to be running ...
	I0318 11:20:27.713449    6988 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 11:20:27.724669    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:20:27.753246    6988 system_svc.go:56] duration metric: took 39.7973ms WaitForService to wait for kubelet
	I0318 11:20:27.753246    6988 kubeadm.go:576] duration metric: took 12.8954846s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:20:27.753246    6988 node_conditions.go:102] verifying NodePressure condition ...
	I0318 11:20:27.901397    6988 request.go:629] Waited for 147.9291ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes
	I0318 11:20:27.901601    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes
	I0318 11:20:27.901601    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:27.901601    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:27.901601    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:27.906444    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:27.908161    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:20:27.908161    6988 node_conditions.go:123] node cpu capacity is 2
	I0318 11:20:27.908161    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:20:27.908161    6988 node_conditions.go:123] node cpu capacity is 2
	I0318 11:20:27.908161    6988 node_conditions.go:105] duration metric: took 154.9138ms to run NodePressure ...
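
The capacity lines are read from each node's status.capacity, which is why they appear in pairs for the two nodes. A sketch of the same read with client-go, kubeconfig path assumed:

    // Sketch: list nodes and print the capacity fields checked by NodePressure.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		c := n.Status.Capacity
    		fmt.Println(n.Name, "cpu:", c.Cpu().String(),
    			"ephemeral-storage:", c.StorageEphemeral().String())
    	}
    }
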
	I0318 11:20:27.908161    6988 start.go:240] waiting for startup goroutines ...
	I0318 11:20:27.908161    6988 start.go:254] writing updated cluster config ...
	I0318 11:20:27.911842    6988 out.go:177] 
	I0318 11:20:27.927754    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:20:27.927754    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:20:27.933717    6988 out.go:177] * Starting "ha-606900-m03" control-plane node in "ha-606900" cluster
	I0318 11:20:27.938223    6988 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:20:27.938223    6988 cache.go:56] Caching tarball of preloaded images
	I0318 11:20:27.938223    6988 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:20:27.938863    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:20:27.938863    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:20:27.943429    6988 start.go:360] acquireMachinesLock for ha-606900-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:20:27.944088    6988 start.go:364] duration metric: took 122.3µs to acquireMachinesLock for "ha-606900-m03"
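
The Spec printed above (Name/Clock/Delay/Timeout) describes an acquire-with-retry lock: try the named machine lock, back off 500ms between attempts, give up after 13 minutes. minikube's actual lock is cross-process; this stdlib sketch only illustrates the in-process shape of the pattern:

    // Minimal sketch of acquireMachinesLock: retry every `delay` until `timeout`.
    package main

    import (
    	"errors"
    	"fmt"
    	"sync"
    	"time"
    )

    var machines sync.Map // lock name -> struct{}

    func acquire(name string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, loaded := machines.LoadOrStore(name, struct{}{}); !loaded {
    			return func() { machines.Delete(name) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out acquiring " + name)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("ha-606900-m03", 500*time.Millisecond, 13*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	fmt.Println("lock held; safe to provision the machine")
    }
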
	I0318 11:20:27.944268    6988 start.go:93] Provisioning new machine with config: &{Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.148.106 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:20:27.944268    6988 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0318 11:20:27.948449    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 11:20:27.948502    6988 start.go:159] libmachine.API.Create for "ha-606900" (driver="hyperv")
	I0318 11:20:27.948502    6988 client.go:168] LocalClient.Create starting
	I0318 11:20:27.949096    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0318 11:20:27.949096    6988 main.go:141] libmachine: Decoding PEM data...
	I0318 11:20:27.949096    6988 main.go:141] libmachine: Parsing certificate...
	I0318 11:20:27.949783    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0318 11:20:27.949987    6988 main.go:141] libmachine: Decoding PEM data...
	I0318 11:20:27.950020    6988 main.go:141] libmachine: Parsing certificate...
	I0318 11:20:27.950162    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 11:20:29.923571    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 11:20:29.923571    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:29.923819    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 11:20:31.723463    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 11:20:31.723463    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:31.723463    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:20:33.274643    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:20:33.275646    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:33.276032    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:20:37.189312    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:20:37.189312    6988 main.go:141] libmachine: [stderr =====>] : 
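
The switch query returns JSON for switches that are either External or the well-known Default Switch GUID, sorted so an External switch would come first. Decoding and choosing in Go might look like this (the SwitchType values follow Hyper-V's enum: 0 Private, 1 Internal, 2 External):

    // Sketch: decode the Get-VMSwitch JSON above and pick the first candidate.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int // 0 = Private, 1 = Internal, 2 = External
    }

    func main() {
    	raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
    	var switches []vmSwitch
    	if err := json.Unmarshal(raw, &switches); err != nil {
    		panic(err)
    	}
    	choice := switches[0] // the PowerShell query already sorted External first
    	fmt.Printf("Using switch %q\n", choice.Name)
    }
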
	I0318 11:20:37.192268    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 11:20:37.647554    6988 main.go:141] libmachine: Creating SSH key...
	I0318 11:20:38.036701    6988 main.go:141] libmachine: Creating VM...
	I0318 11:20:38.036701    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:20:41.062613    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:20:41.063650    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:41.063721    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0318 11:20:41.063721    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:20:42.952451    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:20:42.952451    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:42.952634    6988 main.go:141] libmachine: Creating VHD
	I0318 11:20:42.952722    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 11:20:46.899827    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3309E719-3820-4D3C-8654-837596809030
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 11:20:46.899827    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:46.899827    6988 main.go:141] libmachine: Writing magic tar header
	I0318 11:20:46.899827    6988 main.go:141] libmachine: Writing SSH key tar header
	I0318 11:20:46.909833    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 11:20:50.169729    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:20:50.169875    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:50.169875    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\disk.vhd' -SizeBytes 20000MB
	I0318 11:20:52.826935    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:20:52.826935    6988 main.go:141] libmachine: [stderr =====>] : 
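
The disk image is built in the three steps visible above: create a small fixed-size VHD (a flat file with a trailing footer, so the "magic tar header" and SSH key writes in the log can go straight into it before the VM ever boots), convert it to a dynamic VHD, then grow the virtual size to 20000MB. A Go sketch of the same sequence, using the commands exactly as logged; the ps wrapper is illustrative:

package main

import (
	"log"
	"os/exec"
)

// ps runs one PowerShell command the way the driver invokes powershell.exe.
func ps(cmd string) error {
	return exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Run()
}

func main() {
	dir := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03`
	for _, cmd := range []string{
		// 1. Fixed VHD: raw data area up front, 512-byte footer at the end.
		`Hyper-V\New-VHD -Path '` + dir + `\fixed.vhd' -SizeBytes 10MB -Fixed`,
		// (the driver writes its tar header and SSH key into fixed.vhd here)
		// 2. Convert to dynamic, deleting the fixed source.
		`Hyper-V\Convert-VHD -Path '` + dir + `\fixed.vhd' -DestinationPath '` + dir + `\disk.vhd' -VHDType Dynamic -DeleteSource`,
		// 3. Grow the virtual size; a dynamic VHD stays small on the host disk.
		`Hyper-V\Resize-VHD -Path '` + dir + `\disk.vhd' -SizeBytes 20000MB`,
	} {
		if err := ps(cmd); err != nil {
			log.Fatalf("%s: %v", cmd, err)
		}
	}
}
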
	I0318 11:20:52.827050    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-606900-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 11:20:56.640924    6988 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-606900-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 11:20:56.641023    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:56.641129    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-606900-m03 -DynamicMemoryEnabled $false
	I0318 11:20:58.963906    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:20:58.963975    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:58.963975    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-606900-m03 -Count 2
	I0318 11:21:01.231007    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:01.231007    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:01.231007    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-606900-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\boot2docker.iso'
	I0318 11:21:03.922032    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:03.922915    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:03.923035    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-606900-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\disk.vhd'
	I0318 11:21:06.697476    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:06.697476    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:06.697476    6988 main.go:141] libmachine: Starting VM...
	I0318 11:21:06.698503    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-606900-m03
	I0318 11:21:09.923001    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:09.923001    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:09.923001    6988 main.go:141] libmachine: Waiting for host to start...
	I0318 11:21:09.923001    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:12.270508    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:12.270508    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:12.270508    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:14.894863    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:14.894904    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:15.900157    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:18.151958    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:18.151958    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:18.151958    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:20.775252    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:20.775252    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:21.780155    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:24.049999    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:24.049999    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:24.049999    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:26.702930    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:26.702930    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:27.705184    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:29.968246    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:29.968463    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:29.968544    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:32.575603    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:32.575603    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:33.578210    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:35.897937    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:35.897937    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:35.898208    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:38.582343    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:21:38.582343    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:38.583009    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:40.821436    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:40.821436    6988 main.go:141] libmachine: [stderr =====>] : 
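
"Waiting for host to start..." above is a single polling loop, unrolled in the log: confirm the VM state, ask the first network adapter for its first IP address, and retry after roughly a second until an address appears (about 29s in this run). Condensed into a Go sketch with illustrative names:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func ps(cmd string) string {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	if err != nil {
		log.Fatal(err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	const vm = "ha-606900-m03"
	for {
		if state := ps(`( Hyper-V\Get-VM ` + vm + ` ).state`); state == "Running" {
			ip := ps(`(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`)
			if ip != "" {
				fmt.Println("host is up at", ip) // 172.25.158.182 in this run
				return
			}
		}
		time.Sleep(time.Second) // matches the ~1s gaps between attempts above
	}
}
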
	I0318 11:21:40.821436    6988 machine.go:94] provisionDockerMachine start ...
	I0318 11:21:40.821436    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:43.083303    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:43.083303    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:43.083303    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:45.826412    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:21:45.826412    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:45.832315    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:21:45.844077    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:21:45.844077    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:21:45.979980    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 11:21:45.979980    6988 buildroot.go:166] provisioning hostname "ha-606900-m03"
	I0318 11:21:45.980328    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:48.203719    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:48.204715    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:48.204770    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:50.873609    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:21:50.873609    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:50.879404    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:21:50.880149    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:21:50.880149    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-606900-m03 && echo "ha-606900-m03" | sudo tee /etc/hostname
	I0318 11:21:51.036398    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-606900-m03
	
	I0318 11:21:51.036398    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:53.251539    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:53.251689    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:53.251767    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:55.896601    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:21:55.896601    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:55.906178    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:21:55.906178    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:21:55.906178    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-606900-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-606900-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-606900-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:21:56.046084    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 11:21:56.046084    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0318 11:21:56.046084    6988 buildroot.go:174] setting up certificates
	I0318 11:21:56.046084    6988 provision.go:84] configureAuth start
	I0318 11:21:56.046084    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:58.230359    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:58.230359    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:58.230457    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:00.886157    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:00.886231    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:00.886337    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:03.118631    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:03.118631    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:03.118631    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:05.789457    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:05.789457    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:05.789457    6988 provision.go:143] copyHostCerts
	I0318 11:22:05.790514    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0318 11:22:05.790748    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0318 11:22:05.790853    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0318 11:22:05.791137    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:22:05.792194    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0318 11:22:05.792415    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0318 11:22:05.792415    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0318 11:22:05.792841    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:22:05.793953    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0318 11:22:05.794226    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0318 11:22:05.794226    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0318 11:22:05.794679    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:22:05.795757    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-606900-m03 san=[127.0.0.1 172.25.158.182 ha-606900-m03 localhost minikube]
	I0318 11:22:06.001932    6988 provision.go:177] copyRemoteCerts
	I0318 11:22:06.014870    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:22:06.015006    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:08.184064    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:08.184252    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:08.184252    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:10.832057    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:10.832057    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:10.832255    6988 sshutil.go:53] new ssh client: &{IP:172.25.158.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\id_rsa Username:docker}
	I0318 11:22:10.951013    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9360071s)
	I0318 11:22:10.951047    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 11:22:10.951192    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:22:10.999541    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 11:22:10.999946    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 11:22:11.052041    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 11:22:11.052041    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 11:22:11.104327    6988 provision.go:87] duration metric: took 15.0580927s to configureAuth
	I0318 11:22:11.104538    6988 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:22:11.105309    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:22:11.105359    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:13.347821    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:13.347821    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:13.347896    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:16.030698    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:16.031465    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:16.039118    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:22:16.039524    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:22:16.039524    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:22:16.167931    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:22:16.167931    6988 buildroot.go:70] root file system type: tmpfs
	I0318 11:22:16.168258    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:22:16.168258    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:18.403909    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:18.404131    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:18.404131    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:21.013644    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:21.013644    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:21.021130    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:22:21.021862    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:22:21.021862    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.148.74"
	Environment="NO_PROXY=172.25.148.74,172.25.148.106"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:22:21.179788    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.148.74
	Environment=NO_PROXY=172.25.148.74,172.25.148.106
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:22:21.179847    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:23.397822    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:23.398018    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:23.398018    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:26.040210    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:26.040210    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:26.048683    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:22:26.049557    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:22:26.049557    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:22:28.287260    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
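
The unit file is generated rather than hand-written: the two Environment= lines carry cumulative NO_PROXY snapshots (the later one winning), the empty ExecStart= clears the inherited command as the unit's own comments explain, and the diff || { mv; daemon-reload; enable; restart; } one-liner makes the install idempotent, since diff succeeds and short-circuits when nothing changed. A sketch of how such a unit could be rendered before being shipped to the node; the template is trimmed and illustrative:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
{{range .NoProxy}}Environment="NO_PROXY={{.}}"
{{end}}
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	// One Environment= line per accumulated NO_PROXY snapshot, as in the log.
	data := struct{ NoProxy []string }{
		NoProxy: []string{"172.25.148.74", "172.25.148.74,172.25.148.106"},
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
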
	
	I0318 11:22:28.287260    6988 machine.go:97] duration metric: took 47.4655249s to provisionDockerMachine
	I0318 11:22:28.287260    6988 client.go:171] duration metric: took 2m0.3380004s to LocalClient.Create
	I0318 11:22:28.287260    6988 start.go:167] duration metric: took 2m0.3380004s to libmachine.API.Create "ha-606900"
	I0318 11:22:28.287260    6988 start.go:293] postStartSetup for "ha-606900-m03" (driver="hyperv")
	I0318 11:22:28.287260    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:22:28.300681    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:22:28.300681    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:30.492048    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:30.492048    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:30.492318    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:33.181857    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:33.182441    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:33.182964    6988 sshutil.go:53] new ssh client: &{IP:172.25.158.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\id_rsa Username:docker}
	I0318 11:22:33.287479    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9867662s)
	I0318 11:22:33.299948    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:22:33.307289    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:22:33.307289    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0318 11:22:33.307732    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0318 11:22:33.308752    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> 91202.pem in /etc/ssl/certs
	I0318 11:22:33.308811    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /etc/ssl/certs/91202.pem
	I0318 11:22:33.321318    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 11:22:33.339157    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /etc/ssl/certs/91202.pem (1708 bytes)
	I0318 11:22:33.391038    6988 start.go:296] duration metric: took 5.1037457s for postStartSetup
	I0318 11:22:33.393884    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:35.618187    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:35.618187    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:35.618469    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:38.308584    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:38.308767    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:38.308820    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:22:38.311584    6988 start.go:128] duration metric: took 2m10.3664941s to createHost
	I0318 11:22:38.311584    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:40.531065    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:40.531065    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:40.531065    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:43.197980    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:43.197980    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:43.204318    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:22:43.205043    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:22:43.205043    6988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 11:22:43.336205    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710760963.328851647
	
	I0318 11:22:43.336328    6988 fix.go:216] guest clock: 1710760963.328851647
	I0318 11:22:43.336328    6988 fix.go:229] Guest: 2024-03-18 11:22:43.328851647 +0000 UTC Remote: 2024-03-18 11:22:38.3115843 +0000 UTC m=+602.662131001 (delta=5.017267347s)
	I0318 11:22:43.336457    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:45.587494    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:45.587494    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:45.588017    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:48.347657    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:48.347657    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:48.353632    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:22:48.354500    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:22:48.354500    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710760963
	I0318 11:22:48.504095    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:22:43 UTC 2024
	
	I0318 11:22:48.504095    6988 fix.go:236] clock set: Mon Mar 18 11:22:43 UTC 2024
	 (err=<nil>)
	I0318 11:22:48.504095    6988 start.go:83] releasing machines lock for "ha-606900-m03", held for 2m20.5590698s
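
The clock fix above is mechanical: fix.go reads the guest clock with date +%s.%N, compares it against the host-side timestamp, and, since the skew here (5.017s) is non-trivial, resets the guest clock with sudo date -s @<seconds>. The delta computation on this run's values, sketched in Go (parsing only; the SSH transport is omitted):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest output of `date +%s.%N`, as captured in the log.
	guestRaw := "1710760963.328851647"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Host-side reference from the fix.go line above.
	host, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2024-03-18 11:22:38.3115843 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Printf("guest=%v delta=%v\n", guest.UTC(), guest.Sub(host)) // delta=5.017267347s
	fmt.Printf("fix: sudo date -s @%d\n", sec)                      // the command run in the log
}
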
	I0318 11:22:48.504095    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:50.718328    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:50.718973    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:50.718973    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:53.368257    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:53.368257    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:53.381342    6988 out.go:177] * Found network options:
	I0318 11:22:53.396915    6988 out.go:177]   - NO_PROXY=172.25.148.74,172.25.148.106
	W0318 11:22:53.400156    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:22:53.400156    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:22:53.404740    6988 out.go:177]   - NO_PROXY=172.25.148.74,172.25.148.106
	W0318 11:22:53.407205    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:22:53.407205    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:22:53.408950    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:22:53.408950    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:22:53.410951    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:22:53.410951    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:53.422278    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 11:22:53.422278    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:55.715110    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:55.715110    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:55.715110    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:55.715790    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:55.715967    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:55.715967    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:58.447605    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:58.447679    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:58.448510    6988 sshutil.go:53] new ssh client: &{IP:172.25.158.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\id_rsa Username:docker}
	I0318 11:22:58.476125    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:58.476125    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:58.476861    6988 sshutil.go:53] new ssh client: &{IP:172.25.158.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\id_rsa Username:docker}
	I0318 11:22:58.542547    6988 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1202368s)
	W0318 11:22:58.542668    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:22:58.555611    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:22:58.669917    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 11:22:58.669917    6988 start.go:494] detecting cgroup driver to use...
	I0318 11:22:58.669917    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2589336s)
	I0318 11:22:58.669917    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:22:58.721676    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:22:58.756085    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:22:58.776139    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:22:58.787613    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:22:58.822260    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:22:58.855740    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:22:58.889132    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:22:58.922883    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:22:58.957350    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 11:22:58.991608    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:22:59.023351    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:22:59.053660    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:22:59.257483    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
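
Switching containerd to the cgroupfs driver above is a series of in-place sed rewrites of /etc/containerd/config.toml. The key substitution, sketched in Go with the same whitespace-preserving capture group the sed expression uses:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Mirror of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
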
	I0318 11:22:59.292521    6988 start.go:494] detecting cgroup driver to use...
	I0318 11:22:59.305502    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:22:59.344972    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:22:59.380878    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:22:59.426694    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:22:59.467523    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:22:59.507809    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 11:22:59.571519    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:22:59.601102    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:22:59.660964    6988 ssh_runner.go:195] Run: which cri-dockerd
	I0318 11:22:59.682083    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:22:59.701838    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:22:59.749642    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:22:59.959809    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:23:00.181846    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:23:00.181846    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 11:23:00.231607    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:23:00.456248    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:23:03.009316    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5530514s)
	I0318 11:23:03.022463    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:23:03.063324    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:23:03.105684    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:23:03.344347    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:23:03.562510    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:23:03.772708    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:23:03.820045    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:23:03.856020    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:23:04.069467    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:23:04.182573    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:23:04.197024    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 11:23:04.205719    6988 start.go:562] Will wait 60s for crictl version
	I0318 11:23:04.218716    6988 ssh_runner.go:195] Run: which crictl
	I0318 11:23:04.238554    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:23:04.318304    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 11:23:04.328277    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:23:04.373732    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:23:04.412844    6988 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:23:04.416453    6988 out.go:177]   - env NO_PROXY=172.25.148.74
	I0318 11:23:04.418480    6988 out.go:177]   - env NO_PROXY=172.25.148.74,172.25.148.106
	I0318 11:23:04.421170    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:23:04.427051    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:23:04.427248    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:23:04.427248    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:23:04.427248    6988 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ae:0d:2c Flags:up|broadcast|multicast|running}
	I0318 11:23:04.430000    6988 ip.go:210] interface addr: fe80::f8a6:d6b6:cc4:1ba0/64
	I0318 11:23:04.430000    6988 ip.go:210] interface addr: 172.25.144.1/20
	I0318 11:23:04.441575    6988 ssh_runner.go:195] Run: grep 172.25.144.1	host.minikube.internal$ /etc/hosts
	I0318 11:23:04.449841    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
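
The host.minikube.internal entry is refreshed with a filter-and-rewrite rather than a blind append: drop any existing mapping, append the fresh one, write to a temp file, then cp it over /etc/hosts (cp rather than mv, which also works when /etc/hosts is a bind mount). The filter step, sketched in Go (string-level only; the sudo plumbing is omitted, and refreshHosts is an illustrative name):

package main

import (
	"fmt"
	"strings"
)

// refreshHosts drops any line ending in "\thost.minikube.internal" and
// appends the new mapping, mirroring the shell pipeline in the log.
func refreshHosts(hosts, ip string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(refreshHosts("127.0.0.1\tlocalhost\n", "172.25.144.1"))
}
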
	I0318 11:23:04.472895    6988 mustload.go:65] Loading cluster: ha-606900
	I0318 11:23:04.473797    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:23:04.473850    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:23:06.705787    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:23:06.705787    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:23:06.706169    6988 host.go:66] Checking if "ha-606900" exists ...
	I0318 11:23:06.706982    6988 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900 for IP: 172.25.158.182
	I0318 11:23:06.707036    6988 certs.go:194] generating shared ca certs ...
	I0318 11:23:06.707036    6988 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:23:06.707647    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0318 11:23:06.708029    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:23:06.708209    6988 certs.go:256] generating profile certs ...
	I0318 11:23:06.708836    6988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.key
	I0318 11:23:06.708942    6988 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.f0fc3bf2
	I0318 11:23:06.709339    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.f0fc3bf2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.148.74 172.25.148.106 172.25.158.182 172.25.159.254]
	I0318 11:23:07.067204    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.f0fc3bf2 ...
	I0318 11:23:07.067204    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.f0fc3bf2: {Name:mke127e03e18b4156cbb4926f3348eeff6a27201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:23:07.068565    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.f0fc3bf2 ...
	I0318 11:23:07.068565    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.f0fc3bf2: {Name:mk15eea5ec06174a1c0fceb7d6b416abc057f9eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:23:07.069335    6988 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.f0fc3bf2 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt
	I0318 11:23:07.081976    6988 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.f0fc3bf2 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key
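
The apiserver certificate generated above is an ordinary X.509 server cert whose SAN list bundles the in-cluster service IP (10.96.0.1), loopback, and the node and cluster addresses from the crypto.go line, plus the hostnames. A Go sketch with the exact SANs from this run; it is self-signed here for brevity, whereas the real cert is signed by the shared minikube CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-606900-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list from the crypto.go line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.25.148.74"), net.ParseIP("172.25.148.106"),
			net.ParseIP("172.25.158.182"), net.ParseIP("172.25.159.254"),
		},
		DNSNames: []string{"ha-606900-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
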
	I0318 11:23:07.083863    6988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key
	I0318 11:23:07.083863    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 11:23:07.084086    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 11:23:07.084246    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 11:23:07.084304    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 11:23:07.084544    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 11:23:07.084683    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 11:23:07.084854    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 11:23:07.085111    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 11:23:07.085504    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem (1338 bytes)
	W0318 11:23:07.085504    6988 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120_empty.pem, impossibly tiny 0 bytes
	I0318 11:23:07.085504    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0318 11:23:07.086266    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:23:07.086540    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:23:07.086808    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:23:07.087184    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem (1708 bytes)
	I0318 11:23:07.087645    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem -> /usr/share/ca-certificates/9120.pem
	I0318 11:23:07.087869    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /usr/share/ca-certificates/91202.pem
	I0318 11:23:07.087990    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:23:07.088251    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:23:09.300730    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:23:09.300730    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:23:09.301496    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:23:11.958320    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:23:11.958826    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:23:11.959476    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:23:12.065123    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0318 11:23:12.073902    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 11:23:12.108498    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0318 11:23:12.117038    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0318 11:23:12.152206    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 11:23:12.159997    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 11:23:12.194189    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0318 11:23:12.200998    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0318 11:23:12.239228    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0318 11:23:12.249008    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 11:23:12.284817    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0318 11:23:12.292056    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0318 11:23:12.313529    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:23:12.363711    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 11:23:12.411542    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:23:12.467415    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:23:12.517956    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0318 11:23:12.569831    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 11:23:12.622903    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:23:12.673723    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 11:23:12.724133    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem --> /usr/share/ca-certificates/9120.pem (1338 bytes)
	I0318 11:23:12.777685    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /usr/share/ca-certificates/91202.pem (1708 bytes)
	I0318 11:23:12.827364    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:23:12.878421    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 11:23:12.919672    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0318 11:23:12.955265    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 11:23:12.990224    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0318 11:23:13.028086    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 11:23:13.063599    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0318 11:23:13.099173    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 11:23:13.147686    6988 ssh_runner.go:195] Run: openssl version
	I0318 11:23:13.171300    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9120.pem && ln -fs /usr/share/ca-certificates/9120.pem /etc/ssl/certs/9120.pem"
	I0318 11:23:13.209733    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9120.pem
	I0318 11:23:13.218169    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 11:23:13.231916    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9120.pem
	I0318 11:23:13.262934    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9120.pem /etc/ssl/certs/51391683.0"
	I0318 11:23:13.296979    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91202.pem && ln -fs /usr/share/ca-certificates/91202.pem /etc/ssl/certs/91202.pem"
	I0318 11:23:13.335126    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91202.pem
	I0318 11:23:13.342728    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 11:23:13.357209    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91202.pem
	I0318 11:23:13.379151    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91202.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 11:23:13.412909    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:23:13.450314    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:23:13.458373    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:23:13.473287    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:23:13.495188    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
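Note: the test/ln/openssl sequence above installs each CA into OpenSSL's hashed certificate directory: openssl x509 -hash -noout -in <cert> prints the subject-name hash (b5213941 for minikubeCA.pem in this run), and a symlink named <hash>.0 under /etc/ssl/certs is what OpenSSL's lookup expects. A hedged Go sketch of the same three steps, assuming root and the paths shown in the log; installCA is my name, not minikube's.

    // Install a CA the way the log above does: hash via openssl, then
    // force a <hash>.0 symlink in /etc/ssl/certs. Illustrative sketch.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pem string) error {
        // openssl prints the subject-name hash OpenSSL's lookup expects.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // mirrors ln -fs (force overwrite)
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }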
	I0318 11:23:13.532701    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:23:13.540759    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 11:23:13.540985    6988 kubeadm.go:928] updating node {m03 172.25.158.182 8443 v1.28.4 docker true true} ...
	I0318 11:23:13.541417    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-606900-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.158.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 11:23:13.541497    6988 kube-vip.go:111] generating kube-vip config ...
	I0318 11:23:13.552707    6988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 11:23:13.584416    6988 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 11:23:13.584573    6988 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
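Note: the manifest above is rendered by minikube's kube-vip config generator and, a few lines below, copied to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs it as a static pod advertising the 172.25.159.254 VIP. A trimmed, hedged sketch of generating such a manifest with text/template; the template body and field names here are illustrative, not the actual contents of kube-vip.go.

    // Render a kube-vip static-pod manifest from a small config struct.
    // Trimmed to the fields that vary in the log; a sketch, not minikube's code.
    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - args: ["manager"]
        env:
        - name: address
          value: {{.VIP}}
        - name: port
          value: "{{.Port}}"
        image: {{.Image}}
        name: kube-vip
      hostNetwork: true
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(tmpl))
        t.Execute(os.Stdout, struct {
            VIP, Image string
            Port       int
        }{"172.25.159.254", "ghcr.io/kube-vip/kube-vip:v0.7.1", 8443})
    }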
	I0318 11:23:13.598170    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:23:13.614842    6988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 11:23:13.628544    6988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 11:23:13.649118    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0318 11:23:13.649118    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 11:23:13.649329    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:23:13.649329    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0318 11:23:13.649329    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:23:13.664982    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:23:13.664982    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:23:13.666223    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:23:13.678567    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 11:23:13.678742    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 11:23:13.711300    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 11:23:13.711381    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:23:13.711656    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 11:23:13.725685    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:23:13.810979    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 11:23:13.811109    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
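Note: the "Not caching binary" lines pair each kubelet/kubectl/kubeadm download with a checksum source of the form file:<same URL>.sha256, and the existence checks above decide whether an scp is needed at all. A minimal sketch of that download-and-verify step, assuming the dl.k8s.io URLs from the log; the code is illustrative, not minikube's downloader.

    // Fetch a binary and its published SHA-256, then compare digests,
    // mirroring the checksum=file:...sha256 pattern in the log above.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm"
        bin, err := fetch(base)
        if err != nil {
            panic(err)
        }
        sum, err := fetch(base + ".sha256")
        if err != nil {
            panic(err)
        }
        got := sha256.Sum256(bin)
        want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
        if hex.EncodeToString(got[:]) != want {
            panic("checksum mismatch")
        }
        fmt.Println("kubeadm verified:", want)
    }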
	I0318 11:23:15.127301    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 11:23:15.149209    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0318 11:23:15.185145    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:23:15.226796    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 11:23:15.288351    6988 ssh_runner.go:195] Run: grep 172.25.159.254	control-plane.minikube.internal$ /etc/hosts
	I0318 11:23:15.296743    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
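Note: the one-liner above is minikube's /etc/hosts refresh: strip any stale line tagged control-plane.minikube.internal, append the current VIP mapping to a temp file, and sudo cp it into place. A hedged Go equivalent, assuming the VIP from the log and skipping the sudo/temp-file dance for brevity.

    // Rewrite /etc/hosts the way the bash one-liner above does: drop stale
    // control-plane.minikube.internal entries, append the VIP mapping.
    // Illustrative only; the real command runs remotely under sudo.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const hostsFile = "/etc/hosts"
        data, err := os.ReadFile(hostsFile)
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "172.25.159.254\tcontrol-plane.minikube.internal")
        if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }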
	I0318 11:23:15.338387    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:23:15.577185    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:23:15.615011    6988 host.go:66] Checking if "ha-606900" exists ...
	I0318 11:23:15.615790    6988 start.go:316] joinCluster: &{Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.148.106 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.25.158.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:23:15.615963    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 11:23:15.616066    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:23:17.860502    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:23:17.860713    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:23:17.860713    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:23:20.521644    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:23:20.522396    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:23:20.523061    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:23:20.765196    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1491515s)
	I0318 11:23:20.765196    6988 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.25.158.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:23:20.766130    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8p5tel.vtprgb1klrtlzuvh --discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-606900-m03 --control-plane --apiserver-advertise-address=172.25.158.182 --apiserver-bind-port=8443"
	I0318 11:24:08.534489    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8p5tel.vtprgb1klrtlzuvh --discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-606900-m03 --control-plane --apiserver-advertise-address=172.25.158.182 --apiserver-bind-port=8443": (47.7680595s)
	I0318 11:24:08.534489    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 11:24:09.436469    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-606900-m03 minikube.k8s.io/updated_at=2024_03_18T11_24_09_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd minikube.k8s.io/name=ha-606900 minikube.k8s.io/primary=false
	I0318 11:24:09.613944    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-606900-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 11:24:10.072204    6988 start.go:318] duration metric: took 54.4561284s to joinCluster
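Note: the ~54s joinCluster above is two remote commands: kubeadm token create --print-join-command --ttl=0 on the existing control plane, then the printed join command on m03 with --control-plane, the node's advertise address, --ignore-preflight-errors=all, and the cri-dockerd socket appended. A hedged local sketch of that handshake; the real code pipes both commands through ssh_runner rather than running them locally.

    // Ask kubeadm for a fresh join command, then extend it with the
    // control-plane flags seen in the log. Sketch only; nothing is executed
    // on the remote node here.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubeadm", "token", "create",
            "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        join := strings.TrimSpace(string(out)) +
            " --control-plane --apiserver-advertise-address=172.25.158.182"
        fmt.Println("would run on m03:", join)
    }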
	I0318 11:24:10.072359    6988 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.25.158.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:24:10.076914    6988 out.go:177] * Verifying Kubernetes components...
	I0318 11:24:10.073395    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:24:10.096188    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:24:10.528110    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:24:10.577111    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 11:24:10.578254    6988 kapi.go:59] client config for ha-606900: &rest.Config{Host:"https://172.25.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-606900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-606900\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 11:24:10.578254    6988 kubeadm.go:477] Overriding stale ClientConfig host https://172.25.159.254:8443 with https://172.25.148.74:8443
	I0318 11:24:10.579136    6988 node_ready.go:35] waiting up to 6m0s for node "ha-606900-m03" to be "Ready" ...
	I0318 11:24:10.579136    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:10.579136    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:10.579136    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:10.579136    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:10.605538    6988 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0318 11:24:11.088130    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:11.088130    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:11.088386    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:11.088386    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:11.093677    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:11.579713    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:11.579785    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:11.579785    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:11.579785    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:11.586517    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:12.086411    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:12.086411    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:12.086411    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:12.086411    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:12.097243    6988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:24:12.595095    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:12.595095    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:12.595095    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:12.595095    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:12.601867    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:12.602213    6988 node_ready.go:53] node "ha-606900-m03" has status "Ready":"False"
	I0318 11:24:13.086880    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:13.086965    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:13.086965    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:13.086965    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:13.094809    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:24:13.591099    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:13.591099    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:13.591099    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:13.591099    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:13.595668    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:14.082011    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:14.082011    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:14.082531    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:14.082531    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:14.086919    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:14.592474    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:14.592474    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:14.592759    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:14.592791    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:14.597584    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:15.084685    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:15.085038    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:15.085141    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:15.085141    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:15.092416    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:24:15.093222    6988 node_ready.go:53] node "ha-606900-m03" has status "Ready":"False"
	I0318 11:24:15.590205    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:15.590276    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:15.590276    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:15.590276    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:15.595893    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:16.080119    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:16.080119    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:16.080119    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:16.080119    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:16.166988    6988 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I0318 11:24:16.586550    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:16.586550    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:16.586550    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:16.586550    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:16.591759    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:17.094918    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:17.094918    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:17.094918    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:17.094918    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:17.101139    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:17.102133    6988 node_ready.go:53] node "ha-606900-m03" has status "Ready":"False"
	I0318 11:24:17.587783    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:17.588005    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:17.588005    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:17.588005    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:17.593675    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:18.081249    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:18.081508    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:18.081508    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:18.081508    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:18.086859    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:18.591204    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:18.591385    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:18.591385    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:18.591385    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:18.597053    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:19.085534    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:19.085534    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.085534    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.085534    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.091223    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:19.579564    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:19.579809    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.579809    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.579809    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.586021    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:19.587117    6988 node_ready.go:49] node "ha-606900-m03" has status "Ready":"True"
	I0318 11:24:19.587117    6988 node_ready.go:38] duration metric: took 9.0079248s for node "ha-606900-m03" to be "Ready" ...
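Note: the block of GET /api/v1/nodes/ha-606900-m03 requests above is a ~500ms poll on the node's Ready condition, which flips to True after about 9s here. A hedged client-go sketch of the same check, assuming the kubeconfig path from the log; minikube's own loop lives in node_ready.go rather than using this exact code.

    // Poll a node until its Ready condition is True, mirroring the
    // round_trippers GET loop in the log above. Sketch only.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-606900-m03", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }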
	I0318 11:24:19.587183    6988 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:24:19.587318    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:24:19.587318    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.587318    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.587318    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.597949    6988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:24:19.611894    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jsf9x" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.611894    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jsf9x
	I0318 11:24:19.611894    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.611894    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.611894    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.617929    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:19.618544    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:19.618544    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.618544    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.618544    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.623366    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:19.623656    6988 pod_ready.go:92] pod "coredns-5dd5756b68-jsf9x" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:19.623656    6988 pod_ready.go:81] duration metric: took 11.7622ms for pod "coredns-5dd5756b68-jsf9x" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.623656    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wvh6v" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.624241    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wvh6v
	I0318 11:24:19.624241    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.624322    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.624322    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.628286    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:24:19.629687    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:19.629687    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.629773    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.629773    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.634072    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:19.634893    6988 pod_ready.go:92] pod "coredns-5dd5756b68-wvh6v" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:19.634893    6988 pod_ready.go:81] duration metric: took 11.2365ms for pod "coredns-5dd5756b68-wvh6v" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.634951    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.634997    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900
	I0318 11:24:19.635096    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.635096    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.635138    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.639912    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:19.640575    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:19.640575    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.640575    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.640575    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.645397    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:19.646046    6988 pod_ready.go:92] pod "etcd-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:19.646046    6988 pod_ready.go:81] duration metric: took 11.0957ms for pod "etcd-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.646046    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.646163    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m02
	I0318 11:24:19.646260    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.646260    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.646260    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.650852    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:19.651674    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:19.651674    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.651674    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.651674    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.657482    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:19.658884    6988 pod_ready.go:92] pod "etcd-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:19.658884    6988 pod_ready.go:81] duration metric: took 12.8377ms for pod "etcd-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.658884    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.783310    6988 request.go:629] Waited for 124.3174ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m03
	I0318 11:24:19.783310    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m03
	I0318 11:24:19.783310    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.783310    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.783310    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.791132    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
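Note: the "Waited for ... due to client-side throttling" messages come from client-go's token-bucket rate limiter. With QPS and Burst left at 0 in the rest.Config dump earlier, the client falls back to the defaults (roughly 5 QPS with a burst of 10), so the back-to-back pod/node GETs below queue up for 100-200ms each. A hedged sketch of raising the limits if this polling were ever the bottleneck; the values are arbitrary, not minikube's.

    // Raise client-go's per-client rate limits so rapid polling is not
    // throttled client-side. Sketch; the kubeconfig path is a placeholder.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default is ~5 when left at 0
        cfg.Burst = 100 // default is 10 when left at 0
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }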
	I0318 11:24:19.984878    6988 request.go:629] Waited for 192.3614ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:19.985094    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:19.985094    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.985094    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.985094    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.990916    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:20.185620    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m03
	I0318 11:24:20.185705    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:20.185705    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:20.185705    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:20.191252    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:20.387134    6988 request.go:629] Waited for 194.7598ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:20.387339    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:20.387339    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:20.387456    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:20.387456    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:20.392676    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:20.666982    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m03
	I0318 11:24:20.666982    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:20.666982    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:20.666982    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:20.674738    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:24:20.790527    6988 request.go:629] Waited for 113.9058ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:20.790583    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:20.790583    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:20.790583    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:20.790583    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:20.798128    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:24:21.166627    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m03
	I0318 11:24:21.166716    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:21.166716    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:21.166716    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:21.174258    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:24:21.181642    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:21.181642    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:21.181642    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:21.181642    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:21.186257    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:21.667385    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m03
	I0318 11:24:21.667385    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:21.667385    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:21.667385    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:21.672860    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:21.673871    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:21.673871    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:21.673871    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:21.673871    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:21.678140    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:21.679398    6988 pod_ready.go:92] pod "etcd-ha-606900-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:21.679986    6988 pod_ready.go:81] duration metric: took 2.0210885s for pod "etcd-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:21.679986    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:21.793593    6988 request.go:629] Waited for 113.607ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900
	I0318 11:24:21.793593    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900
	I0318 11:24:21.793593    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:21.793593    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:21.793931    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:21.798280    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:21.982397    6988 request.go:629] Waited for 182.6611ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:21.982837    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:21.982902    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:21.982929    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:21.982929    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:21.988501    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:21.989244    6988 pod_ready.go:92] pod "kube-apiserver-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:21.989302    6988 pod_ready.go:81] duration metric: took 309.2561ms for pod "kube-apiserver-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:21.989302    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:22.188565    6988 request.go:629] Waited for 199.1114ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900-m02
	I0318 11:24:22.188565    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900-m02
	I0318 11:24:22.188565    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:22.188565    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:22.188565    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:22.195195    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:22.393094    6988 request.go:629] Waited for 196.9142ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:22.393441    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:22.393441    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:22.393441    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:22.393441    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:22.399317    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:22.399922    6988 pod_ready.go:92] pod "kube-apiserver-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:22.400716    6988 pod_ready.go:81] duration metric: took 411.3576ms for pod "kube-apiserver-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:22.400716    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:22.594940    6988 request.go:629] Waited for 193.605ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900-m03
	I0318 11:24:22.595028    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900-m03
	I0318 11:24:22.595028    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:22.595028    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:22.595028    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:22.599774    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:22.781815    6988 request.go:629] Waited for 180.1839ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:22.781886    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:22.781958    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:22.781958    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:22.781958    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:22.786732    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:22.788513    6988 pod_ready.go:92] pod "kube-apiserver-ha-606900-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:22.788513    6988 pod_ready.go:81] duration metric: took 387.7946ms for pod "kube-apiserver-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:22.788571    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:22.988567    6988 request.go:629] Waited for 199.7284ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900
	I0318 11:24:22.988749    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900
	I0318 11:24:22.988749    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:22.988988    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:22.988988    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:22.998357    6988 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 11:24:23.195084    6988 request.go:629] Waited for 194.5145ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:23.195367    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:23.195367    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:23.195367    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:23.195367    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:23.200714    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:23.202006    6988 pod_ready.go:92] pod "kube-controller-manager-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:23.202159    6988 pod_ready.go:81] duration metric: took 413.5845ms for pod "kube-controller-manager-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:23.202159    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:23.380908    6988 request.go:629] Waited for 178.6364ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900-m02
	I0318 11:24:23.381059    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900-m02
	I0318 11:24:23.381059    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:23.381059    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:23.381059    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:23.386669    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:23.584065    6988 request.go:629] Waited for 196.3969ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:23.584192    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:23.584192    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:23.584192    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:23.584389    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:23.591870    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:24:23.592606    6988 pod_ready.go:92] pod "kube-controller-manager-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:23.592606    6988 pod_ready.go:81] duration metric: took 390.4447ms for pod "kube-controller-manager-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:23.592606    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:23.788994    6988 request.go:629] Waited for 195.6736ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900-m03
	I0318 11:24:23.788994    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900-m03
	I0318 11:24:23.788994    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:23.788994    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:23.788994    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:23.793601    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:23.991514    6988 request.go:629] Waited for 195.752ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:23.992235    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:23.992235    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:23.992235    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:23.992433    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:23.996921    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:23.998283    6988 pod_ready.go:92] pod "kube-controller-manager-ha-606900-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:23.998283    6988 pod_ready.go:81] duration metric: took 405.6746ms for pod "kube-controller-manager-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:23.998283    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cjhcj" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:24.193474    6988 request.go:629] Waited for 195.1895ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cjhcj
	I0318 11:24:24.193906    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cjhcj
	I0318 11:24:24.193906    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:24.193906    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:24.193906    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:24.199452    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:24.379874    6988 request.go:629] Waited for 179.2815ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:24.379874    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:24.379874    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:24.379874    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:24.379874    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:24.384821    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:24.386279    6988 pod_ready.go:92] pod "kube-proxy-cjhcj" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:24.386354    6988 pod_ready.go:81] duration metric: took 388.069ms for pod "kube-proxy-cjhcj" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:24.386354    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fk4wg" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:24.584269    6988 request.go:629] Waited for 197.7966ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fk4wg
	I0318 11:24:24.584269    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fk4wg
	I0318 11:24:24.584269    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:24.584269    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:24.584269    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:24.590307    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:24.788999    6988 request.go:629] Waited for 197.731ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:24.789115    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:24.789115    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:24.789115    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:24.789115    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:24.794549    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:24.795391    6988 pod_ready.go:92] pod "kube-proxy-fk4wg" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:24.795391    6988 pod_ready.go:81] duration metric: took 409.0338ms for pod "kube-proxy-fk4wg" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:24.795450    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s9lzf" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:24.993157    6988 request.go:629] Waited for 197.4438ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9lzf
	I0318 11:24:24.993157    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9lzf
	I0318 11:24:24.993157    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:24.993157    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:24.993157    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:24.998337    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:25.195530    6988 request.go:629] Waited for 195.8348ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:25.195530    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:25.195758    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:25.195758    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:25.195758    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:25.201477    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:25.202484    6988 pod_ready.go:92] pod "kube-proxy-s9lzf" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:25.202484    6988 pod_ready.go:81] duration metric: took 407.0312ms for pod "kube-proxy-s9lzf" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:25.202484    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:25.383033    6988 request.go:629] Waited for 180.3819ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900
	I0318 11:24:25.383223    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900
	I0318 11:24:25.383327    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:25.383327    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:25.383327    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:25.416217    6988 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0318 11:24:25.589803    6988 request.go:629] Waited for 172.5306ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:25.589994    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:25.589994    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:25.590108    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:25.590108    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:25.595196    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:25.596283    6988 pod_ready.go:92] pod "kube-scheduler-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:25.596350    6988 pod_ready.go:81] duration metric: took 393.8644ms for pod "kube-scheduler-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:25.596350    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:25.793698    6988 request.go:629] Waited for 196.8083ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900-m02
	I0318 11:24:25.793698    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900-m02
	I0318 11:24:25.793698    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:25.793698    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:25.793698    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:25.802132    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:24:25.983603    6988 request.go:629] Waited for 180.2108ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:25.984089    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:25.984089    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:25.984089    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:25.984089    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:25.989660    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:25.989883    6988 pod_ready.go:92] pod "kube-scheduler-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:25.990453    6988 pod_ready.go:81] duration metric: took 394.1ms for pod "kube-scheduler-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:25.990453    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:26.187246    6988 request.go:629] Waited for 196.6486ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900-m03
	I0318 11:24:26.187541    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900-m03
	I0318 11:24:26.187541    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:26.187541    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:26.187541    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:26.193050    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:26.394022    6988 request.go:629] Waited for 199.003ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:26.394022    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:26.394022    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:26.394022    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:26.394022    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:26.399948    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:26.400636    6988 pod_ready.go:92] pod "kube-scheduler-ha-606900-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:26.400636    6988 pod_ready.go:81] duration metric: took 410.1806ms for pod "kube-scheduler-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:26.400636    6988 pod_ready.go:38] duration metric: took 6.8134106s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
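
The pod_ready.go phase above polls each pod's Ready condition until it reports True or the 6m0s budget expires. A minimal client-go sketch of such a loop follows; the kubeconfig path, namespace, and pod name are illustrative, and this is a hedged sketch, not minikube's actual implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a clientset from the default kubeconfig (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 2s, for up to 6m, until the pod reports Ready=True.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-ha-606900", metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API error: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("ready:", err == nil)
    }
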
	I0318 11:24:26.400636    6988 api_server.go:52] waiting for apiserver process to appear ...
	I0318 11:24:26.413367    6988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:24:26.443572    6988 api_server.go:72] duration metric: took 16.3709846s to wait for apiserver process to appear ...
	I0318 11:24:26.443717    6988 api_server.go:88] waiting for apiserver healthz status ...
	I0318 11:24:26.443776    6988 api_server.go:253] Checking apiserver healthz at https://172.25.148.74:8443/healthz ...
	I0318 11:24:26.457758    6988 api_server.go:279] https://172.25.148.74:8443/healthz returned 200:
	ok
	I0318 11:24:26.457966    6988 round_trippers.go:463] GET https://172.25.148.74:8443/version
	I0318 11:24:26.458008    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:26.458008    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:26.458008    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:26.460094    6988 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:24:26.460172    6988 api_server.go:141] control plane version: v1.28.4
	I0318 11:24:26.460252    6988 api_server.go:131] duration metric: took 16.5353ms to wait for apiserver health ...
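
The healthz probe above is a plain GET against the apiserver's /healthz endpoint, which answers 200 with the literal body "ok" when healthy; "kubectl get --raw /healthz" performs the same request from a shell. A hedged sketch, reusing the clientset cs and imports from the sketch above:

    // Fetch /healthz through the discovery REST client.
    func checkHealthz(ctx context.Context, cs *kubernetes.Clientset) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
        if err != nil {
            return err
        }
        fmt.Println(string(body)) // "ok" on a healthy apiserver
        return nil
    }
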
	I0318 11:24:26.460361    6988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 11:24:26.580196    6988 request.go:629] Waited for 119.6784ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:24:26.580196    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:24:26.580196    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:26.580676    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:26.580676    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:26.590728    6988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:24:26.602452    6988 system_pods.go:59] 24 kube-system pods found
	I0318 11:24:26.602614    6988 system_pods.go:61] "coredns-5dd5756b68-jsf9x" [05681724-a32a-40c0-9f26-1c1eb9dffb65] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "coredns-5dd5756b68-wvh6v" [843ee0ec-fcfd-4763-8c92-acfe93bec900] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "etcd-ha-606900" [ed704c6d-aba3-496c-9988-c9f86218f1b4] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "etcd-ha-606900-m02" [a453b1e7-143c-4ea7-a1f4-f6dc6f8aa0b8] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "etcd-ha-606900-m03" [3c26d779-e97f-4226-8ae0-85ca512848cd] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kindnet-8977g" [97e55124-90c8-4cda-854c-ee1059fafdac] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kindnet-b68s4" [d2b7c03a-1303-4e1d-bf2b-2975716685d6] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kindnet-xfbg7" [d871c099-0872-4d03-b1fc-4fe5554f09d1] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-apiserver-ha-606900" [90f9b505-a404-4227-8a93-8d74ab235009] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-apiserver-ha-606900-m02" [b3373a21-b66f-42c9-a088-97e3a86cd9fd] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-apiserver-ha-606900-m03" [2a9bd19c-1d34-468f-9fd6-a82a198125eb] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-controller-manager-ha-606900" [d3660558-d0d0-430f-baeb-912cef1a751f] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-controller-manager-ha-606900-m02" [93c8139a-db05-4492-a62d-13ecabdadab6] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-controller-manager-ha-606900-m03" [dc1f0a22-f7e8-452d-925a-cb7e628d7e65] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-proxy-cjhcj" [9ab10380-cd1a-4487-9715-f82cb025149f] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-proxy-fk4wg" [3b8fe48c-5035-4e97-9a79-73907e53d2ef] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-proxy-s9lzf" [c0ba2c37-0dea-43c1-b2d4-ce36b6f6e9ff] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-scheduler-ha-606900" [6efc4fea-f6fe-4057-96b0-fd62ba3aba5d] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-scheduler-ha-606900-m02" [f1646aeb-90ea-46f7-a0f9-28b3b68f341c] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-scheduler-ha-606900-m03" [657236e0-85b7-4161-866d-7892752bd59c] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-vip-ha-606900" [540ec4bc-f9bc-4710-be1e-bb289e8cbea4] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-vip-ha-606900-m02" [9063c185-9922-4ca7-82df-34db3af5f0be] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-vip-ha-606900-m03" [7bc6ea01-5d48-405a-8023-ac6b7a3f406d] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "storage-provisioner" [d03b3748-8b89-4a55-9e0e-871a5b79532f] Running
	I0318 11:24:26.602614    6988 system_pods.go:74] duration metric: took 142.2524ms to wait for pod list to return data ...
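
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines throughout this log come from client-go's own token-bucket rate limiter, which defaults to 5 requests/s with a burst of 10 when rest.Config.QPS is left at zero; the back-to-back pod and node GETs in the readiness sweep exceed that budget, so each request sleeps briefly before being sent. A hedged sketch of how a client widens those limits (values illustrative; imports as in the first sketch):

    // Raise QPS/Burst on the rest.Config before building the clientset.
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cfg.QPS = 50    // default is 5 requests/s when left at zero
    cfg.Burst = 100 // default burst is 10 when left at zero
    cs, err := kubernetes.NewForConfig(cfg)
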
	I0318 11:24:26.602614    6988 default_sa.go:34] waiting for default service account to be created ...
	I0318 11:24:26.785014    6988 request.go:629] Waited for 182.3985ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:24:26.785014    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:24:26.785014    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:26.785014    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:26.785014    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:26.791138    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:26.791351    6988 default_sa.go:45] found service account: "default"
	I0318 11:24:26.791426    6988 default_sa.go:55] duration metric: took 188.8111ms for default service account to be created ...
	I0318 11:24:26.791426    6988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 11:24:26.988066    6988 request.go:629] Waited for 196.4447ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:24:26.988477    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:24:26.988477    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:26.988477    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:26.988477    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:26.999111    6988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:24:27.010204    6988 system_pods.go:86] 24 kube-system pods found
	I0318 11:24:27.010204    6988 system_pods.go:89] "coredns-5dd5756b68-jsf9x" [05681724-a32a-40c0-9f26-1c1eb9dffb65] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "coredns-5dd5756b68-wvh6v" [843ee0ec-fcfd-4763-8c92-acfe93bec900] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "etcd-ha-606900" [ed704c6d-aba3-496c-9988-c9f86218f1b4] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "etcd-ha-606900-m02" [a453b1e7-143c-4ea7-a1f4-f6dc6f8aa0b8] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "etcd-ha-606900-m03" [3c26d779-e97f-4226-8ae0-85ca512848cd] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kindnet-8977g" [97e55124-90c8-4cda-854c-ee1059fafdac] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kindnet-b68s4" [d2b7c03a-1303-4e1d-bf2b-2975716685d6] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kindnet-xfbg7" [d871c099-0872-4d03-b1fc-4fe5554f09d1] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kube-apiserver-ha-606900" [90f9b505-a404-4227-8a93-8d74ab235009] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kube-apiserver-ha-606900-m02" [b3373a21-b66f-42c9-a088-97e3a86cd9fd] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kube-apiserver-ha-606900-m03" [2a9bd19c-1d34-468f-9fd6-a82a198125eb] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kube-controller-manager-ha-606900" [d3660558-d0d0-430f-baeb-912cef1a751f] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kube-controller-manager-ha-606900-m02" [93c8139a-db05-4492-a62d-13ecabdadab6] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-controller-manager-ha-606900-m03" [dc1f0a22-f7e8-452d-925a-cb7e628d7e65] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-proxy-cjhcj" [9ab10380-cd1a-4487-9715-f82cb025149f] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-proxy-fk4wg" [3b8fe48c-5035-4e97-9a79-73907e53d2ef] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-proxy-s9lzf" [c0ba2c37-0dea-43c1-b2d4-ce36b6f6e9ff] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-scheduler-ha-606900" [6efc4fea-f6fe-4057-96b0-fd62ba3aba5d] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-scheduler-ha-606900-m02" [f1646aeb-90ea-46f7-a0f9-28b3b68f341c] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-scheduler-ha-606900-m03" [657236e0-85b7-4161-866d-7892752bd59c] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-vip-ha-606900" [540ec4bc-f9bc-4710-be1e-bb289e8cbea4] Running
	I0318 11:24:27.011069    6988 system_pods.go:89] "kube-vip-ha-606900-m02" [9063c185-9922-4ca7-82df-34db3af5f0be] Running
	I0318 11:24:27.011069    6988 system_pods.go:89] "kube-vip-ha-606900-m03" [7bc6ea01-5d48-405a-8023-ac6b7a3f406d] Running
	I0318 11:24:27.011069    6988 system_pods.go:89] "storage-provisioner" [d03b3748-8b89-4a55-9e0e-871a5b79532f] Running
	I0318 11:24:27.011069    6988 system_pods.go:126] duration metric: took 219.6415ms to wait for k8s-apps to be running ...
	I0318 11:24:27.011069    6988 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 11:24:27.029692    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:24:27.061009    6988 system_svc.go:56] duration metric: took 49.9395ms WaitForService to wait for kubelet
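
The kubelet check above shells out to systemd: "systemctl is-active --quiet" prints nothing and reports state purely through its exit code (0 = active). A hedged Go equivalent, mirroring the command from the log line (add "os/exec" to the imports of the first sketch):

    // Exit status 0 means the unit is active; anything else surfaces
    // from Run as a non-nil error.
    cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
    if err := cmd.Run(); err != nil {
        fmt.Println("kubelet not active:", err)
    }
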
	I0318 11:24:27.061090    6988 kubeadm.go:576] duration metric: took 16.9884992s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:24:27.061164    6988 node_conditions.go:102] verifying NodePressure condition ...
	I0318 11:24:27.191089    6988 request.go:629] Waited for 129.9241ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes
	I0318 11:24:27.191561    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes
	I0318 11:24:27.191636    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:27.191636    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:27.191636    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:27.198262    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:27.199950    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:24:27.200011    6988 node_conditions.go:123] node cpu capacity is 2
	I0318 11:24:27.200011    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:24:27.200011    6988 node_conditions.go:123] node cpu capacity is 2
	I0318 11:24:27.200011    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:24:27.200011    6988 node_conditions.go:123] node cpu capacity is 2
	I0318 11:24:27.200011    6988 node_conditions.go:105] duration metric: took 138.8468ms to run NodePressure ...
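
The NodePressure pass above reads each node's reported capacity; the cpu and ephemeral-storage figures it logs are node.Status.Capacity verbatim (three nodes, hence three identical pairs of lines). A hedged sketch that lists the same fields, reusing cs from the first sketch:

    // Print the per-node capacity fields the NodePressure check inspects.
    nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, n := range nodes.Items {
        fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
            n.Status.Capacity.Cpu().String(),
            n.Status.Capacity.StorageEphemeral().String())
    }
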
	I0318 11:24:27.200093    6988 start.go:240] waiting for startup goroutines ...
	I0318 11:24:27.200093    6988 start.go:254] writing updated cluster config ...
	I0318 11:24:27.213010    6988 ssh_runner.go:195] Run: rm -f paused
	I0318 11:24:27.365985    6988 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 11:24:27.369488    6988 out.go:177] * Done! kubectl is now configured to use "ha-606900" cluster and "default" namespace by default
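
The closing skew line compares the local kubectl build (1.29.3) against the server version reported by the cluster (1.28.4); a one-minor difference is within kubectl's supported skew, so minikube notes it rather than warning. The server side of that comparison comes from the /version endpoint seen earlier in the log; a hedged sketch reusing cs:

    // ServerVersion performs the same GET /version request logged above.
    v, err := cs.Discovery().ServerVersion()
    if err != nil {
        panic(err)
    }
    fmt.Printf("server: %s.%s (%s)\n", v.Major, v.Minor, v.GitVersion) // e.g. 1.28 (v1.28.4)
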
	
	
	==> Docker <==
	Mar 18 11:20:06 ha-606900 dockerd[1335]: time="2024-03-18T11:20:06.868141296Z" level=info msg="shim disconnected" id=dd2765df77984c0dfc0282142ddfd0048f1196709a1f6617bab6b2963418a7f1 namespace=moby
	Mar 18 11:20:06 ha-606900 dockerd[1335]: time="2024-03-18T11:20:06.869514600Z" level=warning msg="cleaning up after shim disconnected" id=dd2765df77984c0dfc0282142ddfd0048f1196709a1f6617bab6b2963418a7f1 namespace=moby
	Mar 18 11:20:06 ha-606900 dockerd[1335]: time="2024-03-18T11:20:06.869603900Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 11:20:07 ha-606900 dockerd[1335]: time="2024-03-18T11:20:07.670897266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:20:07 ha-606900 dockerd[1335]: time="2024-03-18T11:20:07.671178167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:20:07 ha-606900 dockerd[1335]: time="2024-03-18T11:20:07.671323567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:20:07 ha-606900 dockerd[1335]: time="2024-03-18T11:20:07.671601768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:20:08 ha-606900 dockerd[1329]: time="2024-03-18T11:20:08.207457616Z" level=info msg="ignoring event" container=23a86fce80939cf98b998db897607584c302c28e79b6cda09523d78fb3250120 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.208950221Z" level=info msg="shim disconnected" id=23a86fce80939cf98b998db897607584c302c28e79b6cda09523d78fb3250120 namespace=moby
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.209811324Z" level=warning msg="cleaning up after shim disconnected" id=23a86fce80939cf98b998db897607584c302c28e79b6cda09523d78fb3250120 namespace=moby
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.209976424Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.666490528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.666883529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.667220831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.668023433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:25:07 ha-606900 dockerd[1335]: time="2024-03-18T11:25:07.011047680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:25:07 ha-606900 dockerd[1335]: time="2024-03-18T11:25:07.011099083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:25:07 ha-606900 dockerd[1335]: time="2024-03-18T11:25:07.011127884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:25:07 ha-606900 dockerd[1335]: time="2024-03-18T11:25:07.011299993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:25:07 ha-606900 cri-dockerd[1221]: time="2024-03-18T11:25:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9dca65cf65d68e7faea72379814d24b03e693167d49b285033e2f3086b06f113/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 18 11:25:08 ha-606900 cri-dockerd[1221]: time="2024-03-18T11:25:08Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 18 11:25:08 ha-606900 dockerd[1335]: time="2024-03-18T11:25:08.887712484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:25:08 ha-606900 dockerd[1335]: time="2024-03-18T11:25:08.887944987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:25:08 ha-606900 dockerd[1335]: time="2024-03-18T11:25:08.887968987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:25:08 ha-606900 dockerd[1335]: time="2024-03-18T11:25:08.888631395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	42469c7975926       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   9dca65cf65d68       busybox-5b5d89c9d6-cqzzh
	4e7ce1aac9bdd       6e38f40d628db                                                                                         6 minutes ago        Running             storage-provisioner       1                   86e11d7aaf9c8       storage-provisioner
	567aff9e85a01       22aaebb38f4a9                                                                                         6 minutes ago        Running             kube-vip                  1                   fb6a851d39b23       kube-vip-ha-606900
	53cf29d4a3154       ead0a4a53df89                                                                                         10 minutes ago       Running             coredns                   0                   02eb20e1d0c5e       coredns-5dd5756b68-jsf9x
	bc7e44f9ada53       ead0a4a53df89                                                                                         10 minutes ago       Running             coredns                   0                   91f8065bd476d       coredns-5dd5756b68-wvh6v
	23a86fce80939       6e38f40d628db                                                                                         10 minutes ago       Exited              storage-provisioner       0                   86e11d7aaf9c8       storage-provisioner
	fa2d8375a385e       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              10 minutes ago       Running             kindnet-cni               0                   8ea494cab2f3d       kindnet-b68s4
	c37e249b1e7ad       83f6cc407eed8                                                                                         10 minutes ago       Running             kube-proxy                0                   92445efc60881       kube-proxy-fk4wg
	dd2765df77984       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     11 minutes ago       Exited              kube-vip                  0                   fb6a851d39b23       kube-vip-ha-606900
	d934400d5984a       d058aa5ab969c                                                                                         11 minutes ago       Running             kube-controller-manager   0                   6c00bb606d005       kube-controller-manager-ha-606900
	4befa99d2f5fe       e3db313c6dbc0                                                                                         11 minutes ago       Running             kube-scheduler            0                   69ea36325f7d7       kube-scheduler-ha-606900
	63cfa3b4e52bf       73deb9a3f7025                                                                                         11 minutes ago       Running             etcd                      0                   b537ceac57f36       etcd-ha-606900
	3851638f3614b       7fe0e6f37db33                                                                                         11 minutes ago       Running             kube-apiserver            0                   dd85a073d0853       kube-apiserver-ha-606900
	
	
	==> coredns [53cf29d4a315] <==
	[INFO] 10.244.2.2:39456 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000185302s
	[INFO] 10.244.0.4:37529 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000245603s
	[INFO] 10.244.0.4:36770 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000281103s
	[INFO] 10.244.0.4:40199 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277203s
	[INFO] 10.244.0.4:40864 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167901s
	[INFO] 10.244.0.4:49822 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000209402s
	[INFO] 10.244.1.3:47850 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074901s
	[INFO] 10.244.1.3:43915 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098402s
	[INFO] 10.244.1.3:60347 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089501s
	[INFO] 10.244.1.3:45351 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000228302s
	[INFO] 10.244.1.3:41131 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000227102s
	[INFO] 10.244.1.3:42169 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000184002s
	[INFO] 10.244.2.2:38698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154001s
	[INFO] 10.244.2.2:35967 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059001s
	[INFO] 10.244.2.2:49319 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075901s
	[INFO] 10.244.0.4:48977 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160201s
	[INFO] 10.244.0.4:35098 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139301s
	[INFO] 10.244.0.4:52104 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000778s
	[INFO] 10.244.2.2:53693 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227203s
	[INFO] 10.244.2.2:37606 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154802s
	[INFO] 10.244.0.4:36724 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000164502s
	[INFO] 10.244.0.4:39051 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000063501s
	[INFO] 10.244.1.3:58746 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222202s
	[INFO] 10.244.1.3:36273 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000325604s
	[INFO] 10.244.1.3:43340 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000649s
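
The alternating NXDOMAIN/NOERROR pairs in these CoreDNS logs are a pod's resolver walking its search path under the ndots:5 resolv.conf that cri-dockerd wrote above: a short name such as "kubernetes.default" is first expanded to "kubernetes.default.default.svc.cluster.local" (NXDOMAIN) before the fully qualified "kubernetes.default.svc.cluster.local" resolves. From inside any pod the whole sequence is triggered by a single stdlib lookup (hedged sketch; add "net" to the imports of the first sketch):

    // The resolver applies the pod's search domains automatically,
    // producing exactly the query sequence seen in the CoreDNS log.
    addrs, err := net.LookupHost("kubernetes.default")
    if err != nil {
        panic(err)
    }
    fmt.Println(addrs) // the kubernetes Service ClusterIP, e.g. 10.96.0.1
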
	
	
	==> coredns [bc7e44f9ada5] <==
	[INFO] 10.244.0.4:51884 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000206103s
	[INFO] 10.244.0.4:45821 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.092397107s
	[INFO] 10.244.1.3:57907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268803s
	[INFO] 10.244.1.3:57374 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000089901s
	[INFO] 10.244.2.2:41341 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105201s
	[INFO] 10.244.2.2:34889 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000220102s
	[INFO] 10.244.2.2:40850 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212402s
	[INFO] 10.244.2.2:49832 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000463406s
	[INFO] 10.244.2.2:46881 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192502s
	[INFO] 10.244.0.4:48306 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014609865s
	[INFO] 10.244.0.4:37703 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003025634s
	[INFO] 10.244.0.4:36985 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134301s
	[INFO] 10.244.1.3:50891 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110501s
	[INFO] 10.244.1.3:50088 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086701s
	[INFO] 10.244.2.2:38617 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000266503s
	[INFO] 10.244.0.4:39046 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320903s
	[INFO] 10.244.1.3:50744 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229402s
	[INFO] 10.244.1.3:60928 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079201s
	[INFO] 10.244.1.3:49906 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000230102s
	[INFO] 10.244.1.3:48602 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112201s
	[INFO] 10.244.2.2:37146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129002s
	[INFO] 10.244.2.2:46050 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120301s
	[INFO] 10.244.0.4:48098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000266903s
	[INFO] 10.244.0.4:41403 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123602s
	[INFO] 10.244.1.3:37883 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000403004s
	
	
	==> describe nodes <==
	Name:               ha-606900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-606900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=ha-606900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T11_15_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:15:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-606900
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:26:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:25:23 +0000   Mon, 18 Mar 2024 11:15:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:25:23 +0000   Mon, 18 Mar 2024 11:15:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:25:23 +0000   Mon, 18 Mar 2024 11:15:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:25:23 +0000   Mon, 18 Mar 2024 11:16:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.148.74
	  Hostname:    ha-606900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 9864a1ae82b441db92445aa165d52416
	  System UUID:                a4deeaa1-108c-1843-84eb-dbf36f30972d
	  Boot ID:                    b831565d-b085-4bac-8a24-2cb98d43f687
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-cqzzh             0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-5dd5756b68-jsf9x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 coredns-5dd5756b68-wvh6v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-ha-606900                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-b68s4                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-606900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-606900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-fk4wg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-606900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-606900                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 10m    kube-proxy       
	  Normal  Starting                 10m    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m    kubelet          Node ha-606900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m    kubelet          Node ha-606900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m    kubelet          Node ha-606900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m    node-controller  Node ha-606900 event: Registered Node ha-606900 in Controller
	  Normal  NodeReady                10m    kubelet          Node ha-606900 status is now: NodeReady
	  Normal  RegisteredNode           6m17s  node-controller  Node ha-606900 event: Registered Node ha-606900 in Controller
	  Normal  RegisteredNode           2m18s  node-controller  Node ha-606900 event: Registered Node ha-606900 in Controller
	
	
	Name:               ha-606900-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-606900-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=ha-606900
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T11_20_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:19:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-606900-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:26:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:25:31 +0000   Mon, 18 Mar 2024 11:19:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:25:31 +0000   Mon, 18 Mar 2024 11:19:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:25:31 +0000   Mon, 18 Mar 2024 11:19:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:25:31 +0000   Mon, 18 Mar 2024 11:20:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.148.106
	  Hostname:    ha-606900-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 ab288d88b56445ebb5abe7797a5e23d6
	  System UUID:                18ca2403-b35d-b145-a154-f437766cc0e4
	  Boot ID:                    4966d0bb-7373-40b5-bf9e-32c895219fd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-qdlmz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 etcd-ha-606900-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m47s
	  kube-system                 kindnet-8977g                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m48s
	  kube-system                 kube-apiserver-ha-606900-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-controller-manager-ha-606900-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-proxy-s9lzf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 kube-scheduler-ha-606900-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-vip-ha-606900-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m27s  kube-proxy       
	  Normal  RegisteredNode  6m46s  node-controller  Node ha-606900-m02 event: Registered Node ha-606900-m02 in Controller
	  Normal  RegisteredNode  6m17s  node-controller  Node ha-606900-m02 event: Registered Node ha-606900-m02 in Controller
	  Normal  RegisteredNode  2m18s  node-controller  Node ha-606900-m02 event: Registered Node ha-606900-m02 in Controller
	
	
	Name:               ha-606900-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-606900-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=ha-606900
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T11_24_09_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:24:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-606900-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:26:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:25:36 +0000   Mon, 18 Mar 2024 11:24:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:25:36 +0000   Mon, 18 Mar 2024 11:24:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:25:36 +0000   Mon, 18 Mar 2024 11:24:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:25:36 +0000   Mon, 18 Mar 2024 11:24:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.158.182
	  Hostname:    ha-606900-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 267d48d4e7d64d96be5e77ed1414dceb
	  System UUID:                f706f81a-2ac4-3d4d-a5c3-e84558596b81
	  Boot ID:                    4714b530-227e-4c69-a14d-1e68cd198ab2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-bsmjb                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 etcd-ha-606900-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m38s
	  kube-system                 kindnet-xfbg7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m40s
	  kube-system                 kube-apiserver-ha-606900-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-controller-manager-ha-606900-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-proxy-cjhcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-scheduler-ha-606900-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-vip-ha-606900-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        2m34s  kube-proxy       
	  Normal  RegisteredNode  2m38s  node-controller  Node ha-606900-m03 event: Registered Node ha-606900-m03 in Controller
	  Normal  RegisteredNode  2m37s  node-controller  Node ha-606900-m03 event: Registered Node ha-606900-m03 in Controller
	  Normal  RegisteredNode  2m19s  node-controller  Node ha-606900-m03 event: Registered Node ha-606900-m03 in Controller
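
The three "describe nodes" blocks above are kubectl-style renderings of the Node objects; the Conditions tables map directly to node.Status.Conditions. A hedged sketch printing the same rows, reusing cs from the first sketch:

    // Mirror the Conditions table from "describe nodes".
    nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, n := range nodes.Items {
        for _, c := range n.Status.Conditions {
            fmt.Printf("%-14s %-16s %-6s %s\n", n.Name, c.Type, c.Status, c.Reason)
        }
    }
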
	
	
	==> dmesg <==
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar18 11:14] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.184104] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Mar18 11:15] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.113546] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.575588] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	[  +0.211724] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
	[  +0.231664] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[  +2.906269] systemd-fstab-generator[1174]: Ignoring "noauto" option for root device
	[  +0.211383] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.209542] systemd-fstab-generator[1198]: Ignoring "noauto" option for root device
	[  +0.297375] systemd-fstab-generator[1214]: Ignoring "noauto" option for root device
	[ +13.735575] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.128311] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.830205] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.678483] systemd-fstab-generator[1792]: Ignoring "noauto" option for root device
	[  +0.107563] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.091100] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.954643] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[Mar18 11:16] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.548836] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.903384] kauditd_printk_skb: 29 callbacks suppressed
	[Mar18 11:19] hrtimer: interrupt took 1480705 ns
	[Mar18 11:20] kauditd_printk_skb: 2 callbacks suppressed
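
The same ring buffer can be read straight from the guest when a run leaves the VM alive. A sketch, assuming the ha-606900 profile is still up:

    # Dump the kernel ring buffer from the primary node; requires the profile to be running.
    out/minikube-windows-amd64.exe -p ha-606900 ssh "sudo dmesg"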
	
	
	==> etcd [63cfa3b4e52b] <==
	{"level":"info","ts":"2024-03-18T11:24:06.808084Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"69a8d221ec5c2dd","to":"9c726440491095bc","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-18T11:24:06.808177Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9c726440491095bc"}
	{"level":"info","ts":"2024-03-18T11:24:06.808199Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"69a8d221ec5c2dd","remote-peer-id":"9c726440491095bc"}
	{"level":"info","ts":"2024-03-18T11:24:06.80896Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"69a8d221ec5c2dd","to":"9c726440491095bc","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-18T11:24:06.809064Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"69a8d221ec5c2dd","remote-peer-id":"9c726440491095bc"}
	{"level":"info","ts":"2024-03-18T11:24:06.859612Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"69a8d221ec5c2dd","remote-peer-id":"9c726440491095bc"}
	{"level":"info","ts":"2024-03-18T11:24:06.859687Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"69a8d221ec5c2dd","remote-peer-id":"9c726440491095bc"}
	{"level":"warn","ts":"2024-03-18T11:24:07.081021Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"172.25.158.182:36686","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-03-18T11:24:08.51821Z","caller":"traceutil/trace.go:171","msg":"trace[1315839753] transaction","detail":"{read_only:false; response_revision:1485; number_of_response:1; }","duration":"122.335371ms","start":"2024-03-18T11:24:08.395859Z","end":"2024-03-18T11:24:08.518194Z","steps":["trace[1315839753] 'process raft request'  (duration: 122.07217ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:24:10.057086Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"c4c65f858326f0d8","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"90.895781ms"}
	{"level":"warn","ts":"2024-03-18T11:24:10.057342Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"9c726440491095bc","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"91.154582ms"}
	{"level":"info","ts":"2024-03-18T11:24:10.059173Z","caller":"traceutil/trace.go:171","msg":"trace[1219373419] linearizableReadLoop","detail":"{readStateIndex:1679; appliedIndex:1679; }","duration":"122.16077ms","start":"2024-03-18T11:24:09.936896Z","end":"2024-03-18T11:24:10.059057Z","steps":["trace[1219373419] 'read index received'  (duration: 122.15687ms)","trace[1219373419] 'applied index is now lower than readState.Index'  (duration: 2.8µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T11:24:10.062415Z","caller":"traceutil/trace.go:171","msg":"trace[256586673] transaction","detail":"{read_only:false; response_revision:1494; number_of_response:1; }","duration":"291.595883ms","start":"2024-03-18T11:24:09.770806Z","end":"2024-03-18T11:24:10.062402Z","steps":["trace[256586673] 'process raft request'  (duration: 287.488171ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:24:10.063379Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.526983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-03-18T11:24:10.063879Z","caller":"traceutil/trace.go:171","msg":"trace[1130725728] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1493; }","duration":"126.595683ms","start":"2024-03-18T11:24:09.936832Z","end":"2024-03-18T11:24:10.063427Z","steps":["trace[1130725728] 'agreement among raft nodes before linearized reading'  (duration: 122.633071ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:24:16.058199Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"c4c65f858326f0d8","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"82.334256ms"}
	{"level":"warn","ts":"2024-03-18T11:24:16.058821Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"9c726440491095bc","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"82.474156ms"}
	{"level":"info","ts":"2024-03-18T11:25:42.380276Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":983}
	{"level":"info","ts":"2024-03-18T11:25:42.385368Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":983,"took":"3.380538ms","hash":1046364515}
	{"level":"info","ts":"2024-03-18T11:25:42.385415Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1046364515,"revision":983,"compact-revision":-1}
	{"level":"warn","ts":"2024-03-18T11:26:40.600121Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"c4c65f858326f0d8","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"55.366508ms"}
	{"level":"warn","ts":"2024-03-18T11:26:40.600189Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"9c726440491095bc","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"55.439009ms"}
	{"level":"warn","ts":"2024-03-18T11:26:40.641244Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.625727ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T11:26:40.641607Z","caller":"traceutil/trace.go:171","msg":"trace[85670225] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; response_count:0; response_revision:1999; }","duration":"112.001331ms","start":"2024-03-18T11:26:40.529588Z","end":"2024-03-18T11:26:40.641589Z","steps":["trace[85670225] 'agreement among raft nodes before linearized reading'  (duration: 95.275547ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T11:26:40.642763Z","caller":"traceutil/trace.go:171","msg":"trace[1876442964] transaction","detail":"{read_only:false; response_revision:2000; number_of_response:1; }","duration":"205.625561ms","start":"2024-03-18T11:26:40.437126Z","end":"2024-03-18T11:26:40.642752Z","steps":["trace[1876442964] 'process raft request'  (duration: 187.676064ms)","trace[1876442964] 'compare'  (duration: 17.398891ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:26:44 up 13 min,  0 users,  load average: 0.89, 0.78, 0.49
	Linux ha-606900 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fa2d8375a385] <==
	I0318 11:26:02.115772       1 main.go:250] Node ha-606900-m03 has CIDR [10.244.2.0/24] 
	I0318 11:26:12.136335       1 main.go:223] Handling node with IPs: map[172.25.148.74:{}]
	I0318 11:26:12.136421       1 main.go:227] handling current node
	I0318 11:26:12.136436       1 main.go:223] Handling node with IPs: map[172.25.148.106:{}]
	I0318 11:26:12.136443       1 main.go:250] Node ha-606900-m02 has CIDR [10.244.1.0/24] 
	I0318 11:26:12.137149       1 main.go:223] Handling node with IPs: map[172.25.158.182:{}]
	I0318 11:26:12.137164       1 main.go:250] Node ha-606900-m03 has CIDR [10.244.2.0/24] 
	I0318 11:26:22.146888       1 main.go:223] Handling node with IPs: map[172.25.148.74:{}]
	I0318 11:26:22.147031       1 main.go:227] handling current node
	I0318 11:26:22.147052       1 main.go:223] Handling node with IPs: map[172.25.148.106:{}]
	I0318 11:26:22.147061       1 main.go:250] Node ha-606900-m02 has CIDR [10.244.1.0/24] 
	I0318 11:26:22.147688       1 main.go:223] Handling node with IPs: map[172.25.158.182:{}]
	I0318 11:26:22.147776       1 main.go:250] Node ha-606900-m03 has CIDR [10.244.2.0/24] 
	I0318 11:26:32.161021       1 main.go:223] Handling node with IPs: map[172.25.148.74:{}]
	I0318 11:26:32.161084       1 main.go:227] handling current node
	I0318 11:26:32.161099       1 main.go:223] Handling node with IPs: map[172.25.148.106:{}]
	I0318 11:26:32.161106       1 main.go:250] Node ha-606900-m02 has CIDR [10.244.1.0/24] 
	I0318 11:26:32.161274       1 main.go:223] Handling node with IPs: map[172.25.158.182:{}]
	I0318 11:26:32.161283       1 main.go:250] Node ha-606900-m03 has CIDR [10.244.2.0/24] 
	I0318 11:26:42.181199       1 main.go:223] Handling node with IPs: map[172.25.148.74:{}]
	I0318 11:26:42.181303       1 main.go:227] handling current node
	I0318 11:26:42.181319       1 main.go:223] Handling node with IPs: map[172.25.148.106:{}]
	I0318 11:26:42.181328       1 main.go:250] Node ha-606900-m02 has CIDR [10.244.1.0/24] 
	I0318 11:26:42.181680       1 main.go:223] Handling node with IPs: map[172.25.158.182:{}]
	I0318 11:26:42.181973       1 main.go:250] Node ha-606900-m03 has CIDR [10.244.2.0/24] 
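
kindnet is reconciling one route per peer node from each node's PodCIDR. The assignments it reports can be cross-checked against the API, assuming the ha-606900 context is reachable:

    # List each node's assigned pod CIDR to compare with the kindnet log above.
    kubectl --context ha-606900 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR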
	
	
	==> kube-apiserver [3851638f3614] <==
	I0318 11:20:11.526803       1 trace.go:236] Trace[331216583]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:2a52948d-979c-4fe8-ab56-ba7b55ad0dcf,client:172.25.148.106,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-606900-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (18-Mar-2024 11:20:06.420) (total time: 5105ms):
	Trace[331216583]: ["GuaranteedUpdate etcd3" audit-id:2a52948d-979c-4fe8-ab56-ba7b55ad0dcf,key:/minions/ha-606900-m02,type:*core.Node,resource:nodes 5105ms (11:20:06.421)
	Trace[331216583]:  ---"Txn call completed" 5047ms (11:20:11.475)
	Trace[331216583]:  ---"Txn call completed" 38ms (11:20:11.526)]
	Trace[331216583]: ---"About to apply patch" 5047ms (11:20:11.475)
	Trace[331216583]: ---"Object stored in database" 44ms (11:20:11.526)
	Trace[331216583]: [5.105742904s] [5.105742904s] END
	I0318 11:20:11.551096       1 trace.go:236] Trace[1510498771]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.25.148.74,type:*v1.Endpoints,resource:apiServerIPInfo (18-Mar-2024 11:20:10.558) (total time: 992ms):
	Trace[1510498771]: ---"initial value restored" 920ms (11:20:11.478)
	Trace[1510498771]: [992.684252ms] [992.684252ms] END
	I0318 11:20:11.551936       1 trace.go:236] Trace[295279770]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:3adfc356-f63d-4035-b3f8-429f26098f06,client:172.25.148.106,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 11:20:04.889) (total time: 6662ms):
	Trace[295279770]: ["Create etcd3" audit-id:3adfc356-f63d-4035-b3f8-429f26098f06,key:/pods/kube-system/kube-apiserver-ha-606900-m02,type:*core.Pod,resource:pods 6650ms (11:20:04.901)
	Trace[295279770]:  ---"Txn call succeeded" 6576ms (11:20:11.477)]
	Trace[295279770]: ---"Write to database call failed" len:2996,err:pods "kube-apiserver-ha-606900-m02" already exists 73ms (11:20:11.551)
	Trace[295279770]: [6.662228694s] [6.662228694s] END
	I0318 11:20:11.552339       1 trace.go:236] Trace[2090414660]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:63837aa7-6145-4f31-a763-3603d50be3a8,client:172.25.148.106,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 11:20:04.896) (total time: 6655ms):
	Trace[2090414660]: ["Create etcd3" audit-id:63837aa7-6145-4f31-a763-3603d50be3a8,key:/pods/kube-system/etcd-ha-606900-m02,type:*core.Pod,resource:pods 6648ms (11:20:04.903)
	Trace[2090414660]:  ---"Txn call succeeded" 6574ms (11:20:11.478)]
	Trace[2090414660]: ---"Write to database call failed" len:2213,err:pods "etcd-ha-606900-m02" already exists 74ms (11:20:11.552)
	Trace[2090414660]: [6.655519173s] [6.655519173s] END
	I0318 11:20:11.553084       1 trace.go:236] Trace[2045159223]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:22de262f-ac51-4560-ae30-43f3c6bc0778,client:172.25.148.106,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 11:20:04.897) (total time: 6655ms):
	Trace[2045159223]: ["Create etcd3" audit-id:22de262f-ac51-4560-ae30-43f3c6bc0778,key:/pods/kube-system/kube-controller-manager-ha-606900-m02,type:*core.Pod,resource:pods 6649ms (11:20:04.903)
	Trace[2045159223]:  ---"Txn call succeeded" 6573ms (11:20:11.478)]
	Trace[2045159223]: ---"Write to database call failed" len:2375,err:pods "kube-controller-manager-ha-606900-m02" already exists 74ms (11:20:11.552)
	Trace[2045159223]: [6.655515573s] [6.655515573s] END
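
All three traces attribute most of their 5-6 s totals to the etcd Txn call, consistent with the slow-disk warnings in the etcd log above. A rough follow-up, assuming apiserver metrics remain enabled (the kubeadm default) and PowerShell for filtering:

    # Inspect apiserver-side etcd latency histograms behind the slow traces.
    kubectl --context ha-606900 get --raw /metrics | Select-String "etcd_request_duration_seconds"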
	
	
	==> kube-controller-manager [d934400d5984] <==
	I0318 11:25:06.025605       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="148.118647ms"
	I0318 11:25:06.086266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="58.698482ms"
	I0318 11:25:06.230331       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="143.975715ms"
	I0318 11:25:06.378196       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-5dnk2"
	I0318 11:25:06.436451       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-x8dtl"
	I0318 11:25:06.440155       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-t4sxg"
	I0318 11:25:06.446909       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-jkwr5"
	I0318 11:25:06.447877       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-bc89v"
	I0318 11:25:06.447912       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-xp8j2"
	I0318 11:25:06.593088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="361.877287ms"
	E0318 11:25:06.593140       1 replica_set.go:557] sync "default/busybox-5b5d89c9d6" failed with Operation cannot be fulfilled on replicasets.apps "busybox-5b5d89c9d6": the object has been modified; please apply your changes to the latest version and try again
	I0318 11:25:06.593261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="77.604µs"
	I0318 11:25:06.598797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="435.823µs"
	I0318 11:25:09.194109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="63.601µs"
	I0318 11:25:09.238358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="19.523324ms"
	I0318 11:25:09.238852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="329.404µs"
	I0318 11:25:09.395995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="27.262012ms"
	I0318 11:25:09.396711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="93.901µs"
	I0318 11:25:09.929181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="132.702µs"
	I0318 11:25:10.137005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="64.947743ms"
	I0318 11:25:10.137514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="439.405µs"
	I0318 11:25:39.578252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66.701µs"
	I0318 11:25:40.593497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="58.601µs"
	I0318 11:25:40.621742       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.001µs"
	I0318 11:25:40.634734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="117.401µs"
	
	
	==> kube-proxy [c37e249b1e7a] <==
	I0318 11:16:04.281003       1 server_others.go:69] "Using iptables proxy"
	I0318 11:16:04.298839       1 node.go:141] Successfully retrieved node IP: 172.25.148.74
	I0318 11:16:04.383562       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 11:16:04.383778       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 11:16:04.387997       1 server_others.go:152] "Using iptables Proxier"
	I0318 11:16:04.388202       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 11:16:04.388399       1 server.go:846] "Version info" version="v1.28.4"
	I0318 11:16:04.388499       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 11:16:04.390399       1 config.go:188] "Starting service config controller"
	I0318 11:16:04.390615       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 11:16:04.390978       1 config.go:97] "Starting endpoint slice config controller"
	I0318 11:16:04.391132       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 11:16:04.391922       1 config.go:315] "Starting node config controller"
	I0318 11:16:04.392004       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 11:16:04.492379       1 shared_informer.go:318] Caches are synced for service config
	I0318 11:16:04.492379       1 shared_informer.go:318] Caches are synced for node config
	I0318 11:16:04.492444       1 shared_informer.go:318] Caches are synced for endpoint slice config
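
kube-proxy came up in single-stack IPv4 iptables mode because the guest reports no IPv6 iptables support. The configured mode can be confirmed from its ConfigMap, assuming the kubeadm-style name minikube uses:

    # Show the proxy mode kube-proxy was configured with.
    kubectl --context ha-606900 -n kube-system get configmap kube-proxy -o yaml | Select-String "mode"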
	
	
	==> kube-scheduler [4befa99d2f5f] <==
	E0318 11:15:46.509501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 11:15:46.509712       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 11:15:46.509773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 11:15:46.563356       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 11:15:46.563414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 11:15:46.641919       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 11:15:46.642056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 11:15:46.747094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 11:15:46.749032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 11:15:46.902355       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 11:15:46.902529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 11:15:46.966114       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 11:15:46.966290       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 11:15:47.008866       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 11:15:47.010165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 11:15:47.013882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 11:15:47.013908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 11:15:47.052615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 11:15:47.052891       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 11:15:48.642770       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 11:25:06.004315       1 cache.go:518] "Pod was added to a different node than it was assumed" podKey="1f9b2790-c02e-4e41-b946-d6272e6410fd" pod="default/busybox-5b5d89c9d6-5dnk2" assumedNode="ha-606900-m02" currentNode="ha-606900-m03"
	E0318 11:25:06.005748       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-5dnk2\": pod busybox-5b5d89c9d6-5dnk2 is already assigned to node \"ha-606900-m02\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-5dnk2" node="ha-606900-m03"
	E0318 11:25:06.007616       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 1f9b2790-c02e-4e41-b946-d6272e6410fd(default/busybox-5b5d89c9d6-5dnk2) was assumed on ha-606900-m03 but assigned to ha-606900-m02"
	E0318 11:25:06.011953       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-5dnk2\": pod busybox-5b5d89c9d6-5dnk2 is already assigned to node \"ha-606900-m02\"" pod="default/busybox-5b5d89c9d6-5dnk2"
	I0318 11:25:06.012609       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-5dnk2" node="ha-606900-m02"
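
The startup RBAC failures are the usual pre-cache-sync noise, and the later "already assigned" error appears to be a benign race in which another scheduler instance in this HA cluster bound the pod first. The conflict can be traced through events, though the pod name below is taken from the log and may have been garbage-collected since:

    # Pull events for the doubly-bound pod named in the scheduler log above.
    kubectl --context ha-606900 -n default get events --field-selector involvedObject.name=busybox-5b5d89c9d6-5dnk2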
	
	
	==> kubelet <==
	Mar 18 11:21:49 ha-606900 kubelet[2850]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:21:49 ha-606900 kubelet[2850]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:21:49 ha-606900 kubelet[2850]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:22:49 ha-606900 kubelet[2850]: E0318 11:22:49.941076    2850 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:22:49 ha-606900 kubelet[2850]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:22:49 ha-606900 kubelet[2850]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:22:49 ha-606900 kubelet[2850]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:22:49 ha-606900 kubelet[2850]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:23:49 ha-606900 kubelet[2850]: E0318 11:23:49.941144    2850 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:23:49 ha-606900 kubelet[2850]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:23:49 ha-606900 kubelet[2850]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:23:49 ha-606900 kubelet[2850]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:23:49 ha-606900 kubelet[2850]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:24:49 ha-606900 kubelet[2850]: E0318 11:24:49.948615    2850 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:24:49 ha-606900 kubelet[2850]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:24:49 ha-606900 kubelet[2850]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:24:49 ha-606900 kubelet[2850]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:24:49 ha-606900 kubelet[2850]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:25:06 ha-606900 kubelet[2850]: I0318 11:25:06.026364    2850 topology_manager.go:215] "Topology Admit Handler" podUID="4f73a0f8-81c6-481e-a9bf-9a78224cf41d" podNamespace="default" podName="busybox-5b5d89c9d6-cqzzh"
	Mar 18 11:25:06 ha-606900 kubelet[2850]: I0318 11:25:06.224871    2850 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mggmc\" (UniqueName: \"kubernetes.io/projected/4f73a0f8-81c6-481e-a9bf-9a78224cf41d-kube-api-access-mggmc\") pod \"busybox-5b5d89c9d6-cqzzh\" (UID: \"4f73a0f8-81c6-481e-a9bf-9a78224cf41d\") " pod="default/busybox-5b5d89c9d6-cqzzh"
	Mar 18 11:25:49 ha-606900 kubelet[2850]: E0318 11:25:49.950204    2850 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:25:49 ha-606900 kubelet[2850]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:25:49 ha-606900 kubelet[2850]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:25:49 ha-606900 kubelet[2850]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:25:49 ha-606900 kubelet[2850]:  > table="nat" chain="KUBE-KUBELET-CANARY"
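
The canary failure recurs every minute because the guest kernel cannot create an ip6tables nat table. A hedged check, assuming the Buildroot kernel ships ip6table_nat as a loadable module (if it does not, which these errors suggest, modprobe will fail the same way):

    # Try loading IPv6 NAT support and listing the table on the node.
    out/minikube-windows-amd64.exe -p ha-606900 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L"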
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:26:35.431201    4848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
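
This Docker CLI context warning recurs across the whole run because the metadata file for the "default" context is missing on the CI host. One hedged cleanup, assuming discarding the stale context metadata is acceptable there (PowerShell):

    # Remove stale Docker CLI context metadata and reselect the default context.
    Remove-Item -Recurse -Force "$env:USERPROFILE\.docker\contexts"
    docker context use default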
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-606900 -n ha-606900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-606900 -n ha-606900: (12.7926549s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-606900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (70.22s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (680.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 status --output json -v=7 --alsologtostderr
E0318 11:32:21.960275    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 status --output json -v=7 --alsologtostderr: (50.4383424s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp testdata\cp-test.txt ha-606900:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp testdata\cp-test.txt ha-606900:/home/docker/cp-test.txt: (10.0107324s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test.txt": (9.8208212s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile604163901\001\cp-test_ha-606900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile604163901\001\cp-test_ha-606900.txt: (9.8360989s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test.txt": (9.9943524s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900:/home/docker/cp-test.txt ha-606900-m02:/home/docker/cp-test_ha-606900_ha-606900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900:/home/docker/cp-test.txt ha-606900-m02:/home/docker/cp-test_ha-606900_ha-606900-m02.txt: (17.2348469s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test.txt": (10.1697122s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test_ha-606900_ha-606900-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test_ha-606900_ha-606900-m02.txt": (10.0290004s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900:/home/docker/cp-test.txt ha-606900-m03:/home/docker/cp-test_ha-606900_ha-606900-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900:/home/docker/cp-test.txt ha-606900-m03:/home/docker/cp-test_ha-606900_ha-606900-m03.txt: (17.3501696s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test.txt": (10.0356029s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test_ha-606900_ha-606900-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test_ha-606900_ha-606900-m03.txt": (9.9603957s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900:/home/docker/cp-test.txt ha-606900-m04:/home/docker/cp-test_ha-606900_ha-606900-m04.txt
E0318 11:34:49.610805    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900:/home/docker/cp-test.txt ha-606900-m04:/home/docker/cp-test_ha-606900_ha-606900-m04.txt: (17.4299776s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test.txt": (9.8690178s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test_ha-606900_ha-606900-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test_ha-606900_ha-606900-m04.txt": (9.9368846s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp testdata\cp-test.txt ha-606900-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp testdata\cp-test.txt ha-606900-m02:/home/docker/cp-test.txt: (9.8745075s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test.txt": (9.9018736s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile604163901\001\cp-test_ha-606900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile604163901\001\cp-test_ha-606900-m02.txt: (9.9142828s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test.txt": (10.0584572s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m02:/home/docker/cp-test.txt ha-606900:/home/docker/cp-test_ha-606900-m02_ha-606900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m02:/home/docker/cp-test.txt ha-606900:/home/docker/cp-test_ha-606900-m02_ha-606900.txt: (17.2007825s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test.txt": (9.948399s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test_ha-606900-m02_ha-606900.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test_ha-606900-m02_ha-606900.txt": (10.0069314s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m02:/home/docker/cp-test.txt ha-606900-m03:/home/docker/cp-test_ha-606900-m02_ha-606900-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m02:/home/docker/cp-test.txt ha-606900-m03:/home/docker/cp-test_ha-606900-m02_ha-606900-m03.txt: (17.4200284s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test.txt": (9.9470767s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test_ha-606900-m02_ha-606900-m03.txt"
E0318 11:37:05.161882    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test_ha-606900-m02_ha-606900-m03.txt": (9.8632549s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m02:/home/docker/cp-test.txt ha-606900-m04:/home/docker/cp-test_ha-606900-m02_ha-606900-m04.txt
E0318 11:37:21.956527    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m02:/home/docker/cp-test.txt ha-606900-m04:/home/docker/cp-test_ha-606900-m02_ha-606900-m04.txt: (17.3364523s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test.txt": (9.8673925s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test_ha-606900-m02_ha-606900-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test_ha-606900-m02_ha-606900-m04.txt": (9.9579673s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp testdata\cp-test.txt ha-606900-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp testdata\cp-test.txt ha-606900-m03:/home/docker/cp-test.txt: (9.8994624s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test.txt": (9.9420045s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile604163901\001\cp-test_ha-606900-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile604163901\001\cp-test_ha-606900-m03.txt: (9.9143979s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test.txt": (9.9428496s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m03:/home/docker/cp-test.txt ha-606900:/home/docker/cp-test_ha-606900-m03_ha-606900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m03:/home/docker/cp-test.txt ha-606900:/home/docker/cp-test_ha-606900-m03_ha-606900.txt: (17.2644775s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test.txt": (9.9422839s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test_ha-606900-m03_ha-606900.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test_ha-606900-m03_ha-606900.txt": (9.9365077s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m03:/home/docker/cp-test.txt ha-606900-m02:/home/docker/cp-test_ha-606900-m03_ha-606900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m03:/home/docker/cp-test.txt ha-606900-m02:/home/docker/cp-test_ha-606900-m03_ha-606900-m02.txt: (17.3320011s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test.txt": (9.8959916s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test_ha-606900-m03_ha-606900-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test_ha-606900-m03_ha-606900-m02.txt": (9.8754132s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m03:/home/docker/cp-test.txt ha-606900-m04:/home/docker/cp-test_ha-606900-m03_ha-606900-m04.txt
E0318 11:39:49.607039    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m03:/home/docker/cp-test.txt ha-606900-m04:/home/docker/cp-test_ha-606900-m03_ha-606900-m04.txt: (17.3209654s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test.txt": (9.8526685s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test_ha-606900-m03_ha-606900-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test_ha-606900-m03_ha-606900-m04.txt": (9.9284572s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp testdata\cp-test.txt ha-606900-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp testdata\cp-test.txt ha-606900-m04:/home/docker/cp-test.txt: (9.9645008s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test.txt": (9.8452834s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile604163901\001\cp-test_ha-606900-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile604163901\001\cp-test_ha-606900-m04.txt: (9.9952722s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test.txt": (9.971323s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m04:/home/docker/cp-test.txt ha-606900:/home/docker/cp-test_ha-606900-m04_ha-606900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m04:/home/docker/cp-test.txt ha-606900:/home/docker/cp-test_ha-606900-m04_ha-606900.txt: (17.2830655s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test.txt": (9.9892142s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test_ha-606900-m04_ha-606900.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900 "sudo cat /home/docker/cp-test_ha-606900-m04_ha-606900.txt": (9.8630854s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m04:/home/docker/cp-test.txt ha-606900-m02:/home/docker/cp-test_ha-606900-m04_ha-606900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m04:/home/docker/cp-test.txt ha-606900-m02:/home/docker/cp-test_ha-606900-m04_ha-606900-m02.txt: (17.1923824s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test.txt": (9.8404606s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test_ha-606900-m04_ha-606900-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m02 "sudo cat /home/docker/cp-test_ha-606900-m04_ha-606900-m02.txt": (9.9908954s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m04:/home/docker/cp-test.txt ha-606900-m03:/home/docker/cp-test_ha-606900-m04_ha-606900-m03.txt
E0318 11:42:21.958831    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 cp ha-606900-m04:/home/docker/cp-test.txt ha-606900-m03:/home/docker/cp-test_ha-606900-m04_ha-606900-m03.txt: (17.3133125s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (7.9503292s)

                                                
                                                
** stderr ** 
	W0318 11:42:27.839340    8524 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:536: failed to run command by deadline; exceeded timeout: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m04 \"sudo cat /home/docker/cp-test.txt\"": exit status 1
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test_ha-606900-m04_ha-606900-m03.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test_ha-606900-m04_ha-606900-m03.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline; exceeded timeout: out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 "sudo cat /home/docker/cp-test_ha-606900-m04_ha-606900-m03.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-606900 ssh -n ha-606900-m03 \"sudo cat /home/docker/cp-test_ha-606900-m04_ha-606900-m03.txt\"": context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-606900 -n ha-606900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-606900 -n ha-606900: (13.0238786s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 logs -n 25: (9.2790301s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-606900 cp testdata\cp-test.txt                                                                                        | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:37 UTC | 18 Mar 24 11:37 UTC |
	|         | ha-606900-m03:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n                                                                                                         | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:37 UTC | 18 Mar 24 11:38 UTC |
	|         | ha-606900-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-606900 cp ha-606900-m03:/home/docker/cp-test.txt                                                                      | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:38 UTC | 18 Mar 24 11:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile604163901\001\cp-test_ha-606900-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n                                                                                                         | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:38 UTC | 18 Mar 24 11:38 UTC |
	|         | ha-606900-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-606900 cp ha-606900-m03:/home/docker/cp-test.txt                                                                      | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:38 UTC | 18 Mar 24 11:38 UTC |
	|         | ha-606900:/home/docker/cp-test_ha-606900-m03_ha-606900.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n                                                                                                         | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:38 UTC | 18 Mar 24 11:38 UTC |
	|         | ha-606900-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n ha-606900 sudo cat                                                                                      | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:38 UTC | 18 Mar 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-606900-m03_ha-606900.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-606900 cp ha-606900-m03:/home/docker/cp-test.txt                                                                      | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:39 UTC | 18 Mar 24 11:39 UTC |
	|         | ha-606900-m02:/home/docker/cp-test_ha-606900-m03_ha-606900-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n                                                                                                         | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:39 UTC | 18 Mar 24 11:39 UTC |
	|         | ha-606900-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n ha-606900-m02 sudo cat                                                                                  | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:39 UTC | 18 Mar 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-606900-m03_ha-606900-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-606900 cp ha-606900-m03:/home/docker/cp-test.txt                                                                      | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:39 UTC | 18 Mar 24 11:39 UTC |
	|         | ha-606900-m04:/home/docker/cp-test_ha-606900-m03_ha-606900-m04.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n                                                                                                         | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:39 UTC | 18 Mar 24 11:40 UTC |
	|         | ha-606900-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n ha-606900-m04 sudo cat                                                                                  | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:40 UTC | 18 Mar 24 11:40 UTC |
	|         | /home/docker/cp-test_ha-606900-m03_ha-606900-m04.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-606900 cp testdata\cp-test.txt                                                                                        | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:40 UTC | 18 Mar 24 11:40 UTC |
	|         | ha-606900-m04:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n                                                                                                         | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:40 UTC | 18 Mar 24 11:40 UTC |
	|         | ha-606900-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-606900 cp ha-606900-m04:/home/docker/cp-test.txt                                                                      | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:40 UTC | 18 Mar 24 11:40 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile604163901\001\cp-test_ha-606900-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n                                                                                                         | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:40 UTC | 18 Mar 24 11:40 UTC |
	|         | ha-606900-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-606900 cp ha-606900-m04:/home/docker/cp-test.txt                                                                      | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:40 UTC | 18 Mar 24 11:41 UTC |
	|         | ha-606900:/home/docker/cp-test_ha-606900-m04_ha-606900.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n                                                                                                         | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:41 UTC | 18 Mar 24 11:41 UTC |
	|         | ha-606900-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n ha-606900 sudo cat                                                                                      | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:41 UTC | 18 Mar 24 11:41 UTC |
	|         | /home/docker/cp-test_ha-606900-m04_ha-606900.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-606900 cp ha-606900-m04:/home/docker/cp-test.txt                                                                      | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:41 UTC | 18 Mar 24 11:41 UTC |
	|         | ha-606900-m02:/home/docker/cp-test_ha-606900-m04_ha-606900-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n                                                                                                         | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:41 UTC | 18 Mar 24 11:42 UTC |
	|         | ha-606900-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n ha-606900-m02 sudo cat                                                                                  | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:42 UTC | 18 Mar 24 11:42 UTC |
	|         | /home/docker/cp-test_ha-606900-m04_ha-606900-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-606900 cp ha-606900-m04:/home/docker/cp-test.txt                                                                      | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:42 UTC | 18 Mar 24 11:42 UTC |
	|         | ha-606900-m03:/home/docker/cp-test_ha-606900-m04_ha-606900-m03.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-606900 ssh -n                                                                                                         | ha-606900 | minikube6\jenkins | v1.32.0 | 18 Mar 24 11:42 UTC |                     |
	|         | ha-606900-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
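Each cp/ssh pair in the audit trail above is a round-trip check: copy a file onto a node, then `cat` it back over ssh and compare. A minimal sketch of that verification idea, with the profile, node, and paths taken from the table purely for illustration (flag handling is simplified, not the test's actual helper):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// roundTrip copies src into a node, reads it back over ssh, and
// verifies the contents survived the trip.
func roundTrip(profile, node, src, dst string, want []byte) error {
	cp := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst)
	if out, err := cp.CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	cat := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+dst)
	got, err := cat.Output()
	if err != nil {
		return fmt.Errorf("ssh cat failed: %v", err)
	}
	if !bytes.Equal(got, want) {
		return fmt.Errorf("content mismatch: got %q want %q", got, want)
	}
	return nil
}

func main() {
	err := roundTrip("ha-606900", "ha-606900-m03",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt", []byte("hello\n"))
	fmt.Println(err)
}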
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 11:12:35
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 11:12:35.830652    6988 out.go:291] Setting OutFile to fd 880 ...
	I0318 11:12:35.830652    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:12:35.830652    6988 out.go:304] Setting ErrFile to fd 1084...
	I0318 11:12:35.830652    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:12:35.853346    6988 out.go:298] Setting JSON to false
	I0318 11:12:35.856650    6988 start.go:129] hostinfo: {"hostname":"minikube6","uptime":135680,"bootTime":1710624675,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0318 11:12:35.856650    6988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 11:12:35.862951    6988 out.go:177] * [ha-606900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0318 11:12:35.869346    6988 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 11:12:35.869346    6988 notify.go:220] Checking for updates...
	I0318 11:12:35.872190    6988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 11:12:35.874973    6988 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0318 11:12:35.877697    6988 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 11:12:35.879630    6988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 11:12:35.883006    6988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 11:12:41.331610    6988 out.go:177] * Using the hyperv driver based on user configuration
	I0318 11:12:41.336169    6988 start.go:297] selected driver: hyperv
	I0318 11:12:41.336169    6988 start.go:901] validating driver "hyperv" against <nil>
	I0318 11:12:41.336169    6988 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 11:12:41.386045    6988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 11:12:41.387380    6988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:12:41.387380    6988 cni.go:84] Creating CNI manager for ""
	I0318 11:12:41.387380    6988 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 11:12:41.387380    6988 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 11:12:41.388101    6988 start.go:340] cluster config:
	{Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:12:41.388476    6988 iso.go:125] acquiring lock: {Name:mk859ea173f7c19f70b69d7017f4a5a661cd1500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 11:12:41.391613    6988 out.go:177] * Starting "ha-606900" primary control-plane node in "ha-606900" cluster
	I0318 11:12:41.395867    6988 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:12:41.396127    6988 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 11:12:41.396200    6988 cache.go:56] Caching tarball of preloaded images
	I0318 11:12:41.396548    6988 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:12:41.396728    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:12:41.396935    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:12:41.397498    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json: {Name:mk88c122c030bbdaff9f17f92b0a3b058cc8268e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:12:41.398473    6988 start.go:360] acquireMachinesLock for ha-606900: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:12:41.398473    6988 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-606900"
	I0318 11:12:41.399069    6988 start.go:93] Provisioning new machine with config: &{Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:12:41.399214    6988 start.go:125] createHost starting for "" (driver="hyperv")
	I0318 11:12:41.401877    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 11:12:41.401877    6988 start.go:159] libmachine.API.Create for "ha-606900" (driver="hyperv")
	I0318 11:12:41.401877    6988 client.go:168] LocalClient.Create starting
	I0318 11:12:41.402715    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0318 11:12:41.402897    6988 main.go:141] libmachine: Decoding PEM data...
	I0318 11:12:41.402897    6988 main.go:141] libmachine: Parsing certificate...
	I0318 11:12:41.403172    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0318 11:12:41.403172    6988 main.go:141] libmachine: Decoding PEM data...
	I0318 11:12:41.403172    6988 main.go:141] libmachine: Parsing certificate...
	I0318 11:12:41.403172    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 11:12:43.540920    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 11:12:43.540920    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:12:43.541042    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 11:12:45.362820    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 11:12:45.362820    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:12:45.362820    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:12:46.887627    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:12:46.888371    6988 main.go:141] libmachine: [stderr =====>] : 
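The `[executing ==>]` lines above are PowerShell one-liners run through `powershell.exe -NoProfile -NonInteractive`, with stdout and stderr captured separately and echoed back as the `[stdout]`/`[stderr]` pairs. A minimal sketch of that invocation pattern (not minikube's actual wrapper):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runPowerShell executes a single PowerShell command non-interactively,
// returning stdout and stderr separately, like the [stdout]/[stderr]
// pairs in the log above.
func runPowerShell(command string) (stdout, stderr string, err error) {
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command)
	var out, errBuf bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &errBuf
	err = cmd.Run()
	return out.String(), errBuf.String(), err
}

func main() {
	// The same administrator-role check the log runs before creating the VM.
	out, _, err := runPowerShell(`@([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")`)
	fmt.Printf("admin=%s err=%v\n", out, err)
}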
	I0318 11:12:46.888493    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:12:50.575417    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:12:50.575417    6988 main.go:141] libmachine: [stderr =====>] : 
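The switch query asks PowerShell for JSON (`ConvertTo-Json`) precisely so the Go side can unmarshal it. A minimal sketch of decoding the payload shape shown above:

package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the fields selected by the Get-VMSwitch query above.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func main() {
	payload := []byte(`[
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]`)
	var switches []vmSwitch
	if err := json.Unmarshal(payload, &switches); err != nil {
		panic(err)
	}
	// Pick the first usable switch, as the log does with "Default Switch".
	fmt.Printf("using switch %q (type %d)\n", switches[0].Name, switches[0].SwitchType)
}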
	I0318 11:12:50.578640    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 11:12:51.070101    6988 main.go:141] libmachine: Creating SSH key...
	I0318 11:12:51.160792    6988 main.go:141] libmachine: Creating VM...
	I0318 11:12:51.160792    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:12:54.035165    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:12:54.036083    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:12:54.036083    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0318 11:12:54.036083    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:12:55.837167    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:12:56.006013    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:12:56.006271    6988 main.go:141] libmachine: Creating VHD
	I0318 11:12:56.006425    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 11:12:59.751439    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 01C96E1C-CEF4-4C5C-B17A-6C2519E24BE7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 11:12:59.751439    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:12:59.751439    6988 main.go:141] libmachine: Writing magic tar header
	I0318 11:12:59.752231    6988 main.go:141] libmachine: Writing SSH key tar header
	I0318 11:12:59.764120    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 11:13:02.936326    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:02.936418    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:02.936507    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\disk.vhd' -SizeBytes 20000MB
	I0318 11:13:05.559857    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:05.559857    6988 main.go:141] libmachine: [stderr =====>] : 
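The "Writing magic tar header" / "Writing SSH key tar header" steps above are the boot2docker disk trick: a tiny fixed VHD is seeded with a tar stream carrying the SSH key, converted to a dynamic VHD, then resized; on first boot the guest detects the tar magic at the start of the disk and unpacks it. A minimal sketch of seeding a raw image with a tar entry, with paths and key contents purely illustrative (not the exact libmachine layout):

package main

import (
	"archive/tar"
	"os"
)

// seedDisk writes a tar stream carrying an SSH public key at the start
// of a raw disk image, so a first-boot script in the guest can detect
// the tar magic and extract it.
func seedDisk(imagePath string, key []byte) error {
	f, err := os.OpenFile(imagePath, os.O_WRONLY|os.O_CREATE, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	hdr := &tar.Header{
		Name: ".ssh/authorized_keys",
		Mode: 0o600,
		Size: int64(len(key)),
	}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(key); err != nil {
		return err
	}
	return tw.Close()
}

func main() {
	if err := seedDisk("disk.img", []byte("ssh-rsa AAAA... user@host\n")); err != nil {
		panic(err)
	}
}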
	I0318 11:13:05.560408    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-606900 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 11:13:09.253956    6988 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-606900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 11:13:09.253956    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:09.253956    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-606900 -DynamicMemoryEnabled $false
	I0318 11:13:11.531707    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:11.531760    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:11.531760    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-606900 -Count 2
	I0318 11:13:13.743011    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:13.743791    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:13.743791    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-606900 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\boot2docker.iso'
	I0318 11:13:16.406409    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:16.406409    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:16.406409    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-606900 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\disk.vhd'
	I0318 11:13:19.101952    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:19.102435    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:19.102435    6988 main.go:141] libmachine: Starting VM...
	I0318 11:13:19.102637    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-606900
	I0318 11:13:22.206846    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:22.206846    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:22.206846    6988 main.go:141] libmachine: Waiting for host to start...
	I0318 11:13:22.206846    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:24.475079    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:24.476078    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:24.476078    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:13:27.080284    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:27.080426    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:28.089986    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:30.316981    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:30.316981    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:30.316981    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:13:32.880486    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:32.880563    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:33.887545    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:36.163932    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:36.164049    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:36.164114    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:13:38.689956    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:38.689956    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:39.701591    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:41.948552    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:41.948843    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:41.948843    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:13:44.585824    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:13:44.585824    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:45.586379    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:47.814899    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:47.814899    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:47.815571    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:13:50.508581    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:13:50.508581    6988 main.go:141] libmachine: [stderr =====>] : 
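The "Waiting for host to start..." stretch above alternates two queries, VM state and first NIC IP, sleeping about a second between rounds until the guest reports an address. A minimal sketch of that poll-until-IP loop, where the two stub helpers stand in for the PowerShell calls shown in the log:

package main

import (
	"errors"
	"fmt"
	"strings"
	"time"
)

// getState and getIP stand in for the ( Get-VM ).state and
// networkadapters[0].ipaddresses[0] queries in the log.
func getState(vm string) (string, error) { return "Running", nil }
func getIP(vm string) (string, error)    { return "172.25.148.74", nil }

// waitForIP polls until the VM is Running and has an address, which can
// take several rounds while the guest boots.
func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		st, err := getState(vm)
		if err != nil || st != "Running" {
			return "", fmt.Errorf("vm not running: %v (state %q)", err, st)
		}
		ip, err := getIP(vm)
		if err == nil && strings.TrimSpace(ip) != "" {
			return ip, nil
		}
		time.Sleep(time.Second)
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	ip, err := waitForIP("ha-606900", 2*time.Minute)
	fmt.Println(ip, err)
}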
	I0318 11:13:50.509682    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:52.688509    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:52.688509    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:52.688509    6988 machine.go:94] provisionDockerMachine start ...
	I0318 11:13:52.689070    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:54.855211    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:54.855211    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:54.855211    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:13:57.437874    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:13:57.437874    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:57.444710    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:13:57.455227    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:13:57.455227    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:13:57.572975    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 11:13:57.573099    6988 buildroot.go:166] provisioning hostname "ha-606900"
	I0318 11:13:57.573200    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:13:59.696497    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:13:59.696497    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:13:59.697094    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:02.260379    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:02.260379    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:02.268333    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:02.269249    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:02.269249    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-606900 && echo "ha-606900" | sudo tee /etc/hostname
	I0318 11:14:02.422346    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-606900
	
	I0318 11:14:02.422560    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:04.581810    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:04.581948    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:04.581948    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:07.223238    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:07.223756    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:07.230356    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:07.230898    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:07.230898    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-606900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-606900/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-606900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:14:07.368603    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 11:14:07.368603    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0318 11:14:07.368603    6988 buildroot.go:174] setting up certificates
	I0318 11:14:07.368603    6988 provision.go:84] configureAuth start
	I0318 11:14:07.368603    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:09.530602    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:09.530602    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:09.530602    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:12.097818    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:12.098191    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:12.098191    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:14.328640    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:14.328843    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:14.328975    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:16.982410    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:16.983030    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:16.983091    6988 provision.go:143] copyHostCerts
	I0318 11:14:16.983091    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0318 11:14:16.983091    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0318 11:14:16.983091    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0318 11:14:16.983876    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:14:16.984546    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0318 11:14:16.985138    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0318 11:14:16.985138    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0318 11:14:16.985138    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:14:16.986267    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0318 11:14:16.986832    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0318 11:14:16.986832    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0318 11:14:16.986973    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:14:16.987907    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-606900 san=[127.0.0.1 172.25.148.74 ha-606900 localhost minikube]
	I0318 11:14:17.067089    6988 provision.go:177] copyRemoteCerts
	I0318 11:14:17.079889    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:14:17.080000    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:19.266914    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:19.266914    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:19.267473    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:21.882070    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:21.882180    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:21.882601    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:14:21.996499    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9164682s)
	I0318 11:14:21.996499    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 11:14:21.997099    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 11:14:22.046399    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 11:14:22.047382    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:14:22.097218    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 11:14:22.098124    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0318 11:14:22.145565    6988 provision.go:87] duration metric: took 14.7767048s to configureAuth
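configureAuth above mints a server certificate whose SANs cover the VM's IP and hostnames (`san=[127.0.0.1 172.25.148.74 ha-606900 localhost minikube]`). A minimal self-signed sketch of building such a SAN list with crypto/x509; the real flow signs with the minikube CA rather than self-signing, so treat this purely as an illustration of the SAN fields:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-606900"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
		// SANs mirroring the san=[...] list in the log.
		DNSNames:    []string{"ha-606900", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.25.148.74")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for brevity; minikube signs with its own CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte server cert\n", len(der))
}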
	I0318 11:14:22.145565    6988 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:14:22.145741    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:14:22.145741    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:24.325007    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:24.325348    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:24.325444    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:26.942633    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:26.942633    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:26.947961    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:26.948851    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:26.948851    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:14:27.084291    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:14:27.084380    6988 buildroot.go:70] root file system type: tmpfs
	I0318 11:14:27.084669    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:14:27.084757    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:29.264363    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:29.265231    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:29.265403    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:31.873428    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:31.874150    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:31.880927    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:31.880927    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:31.881633    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:14:32.039753    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:14:32.039753    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:34.224968    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:34.224968    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:34.225711    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:36.823813    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:36.823813    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:36.829884    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:36.830627    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:36.830627    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:14:39.056402    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
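The `diff -u ... || { mv ...; daemon-reload; enable; restart; }` one-liner above makes the unit install idempotent: Docker is only replaced and restarted when the rendered file actually differs (here `diff` fails because no unit exists yet, hence the symlink creation on first enable). A minimal sketch of the same compare-then-swap idea for a local file, with hypothetical names:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged writes newContent to path only when it differs from
// what is already there, reporting whether a service restart is needed.
func installIfChanged(path string, newContent []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // identical: skip the restart, like the diff short-circuit
	}
	if err := os.WriteFile(path, newContent, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := installIfChanged("docker.service", []byte("[Unit]\n...\n"))
	fmt.Println("changed:", changed, "err:", err)
}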
	
	I0318 11:14:39.056402    6988 machine.go:97] duration metric: took 46.3676038s to provisionDockerMachine
	I0318 11:14:39.056402    6988 client.go:171] duration metric: took 1m57.6537933s to LocalClient.Create
	I0318 11:14:39.056402    6988 start.go:167] duration metric: took 1m57.6537933s to libmachine.API.Create "ha-606900"
	I0318 11:14:39.056402    6988 start.go:293] postStartSetup for "ha-606900" (driver="hyperv")
	I0318 11:14:39.056402    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:14:39.072210    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:14:39.072210    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:41.195630    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:41.195630    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:41.196046    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:43.797959    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:43.797959    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:43.798700    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:14:43.899850    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8276095s)
	I0318 11:14:43.910795    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:14:43.918063    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:14:43.918063    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0318 11:14:43.918766    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0318 11:14:43.919879    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> 91202.pem in /etc/ssl/certs
	I0318 11:14:43.919953    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /etc/ssl/certs/91202.pem
	I0318 11:14:43.931771    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 11:14:43.950302    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /etc/ssl/certs/91202.pem (1708 bytes)
	I0318 11:14:43.998543    6988 start.go:296] duration metric: took 4.9421102s for postStartSetup
	I0318 11:14:44.001483    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:46.189672    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:46.189672    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:46.189931    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:48.760889    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:48.760889    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:48.760889    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:14:48.764149    6988 start.go:128] duration metric: took 2m7.3640319s to createHost
	I0318 11:14:48.764243    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:50.921725    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:50.922186    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:50.922267    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:53.458294    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:53.458294    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:53.463531    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:53.464054    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:53.464266    6988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 11:14:53.601918    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710760493.600160434
	
	I0318 11:14:53.602033    6988 fix.go:216] guest clock: 1710760493.600160434
	I0318 11:14:53.602033    6988 fix.go:229] Guest: 2024-03-18 11:14:53.600160434 +0000 UTC Remote: 2024-03-18 11:14:48.7641493 +0000 UTC m=+133.117652501 (delta=4.836011134s)
	I0318 11:14:53.602159    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:14:55.731159    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:14:55.731159    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:55.731888    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:14:58.324411    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:14:58.324411    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:14:58.330605    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:14:58.330781    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.74 22 <nil> <nil>}
	I0318 11:14:58.330781    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710760493
	I0318 11:14:58.467711    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:14:53 UTC 2024
	
	I0318 11:14:58.467711    6988 fix.go:236] clock set: Mon Mar 18 11:14:53 UTC 2024
	 (err=<nil>)
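The fix.go lines above compare the guest's `date +%s.%N` output against the host's wall clock and, on drift, write a time back with `sudo date -s @<epoch>`. A minimal Go sketch of that comparison follows; the one-second tolerance, the helper name, and the choice to write back the host's time are all assumptions of the sketch, not minikube's verified logic.

```go
package main

import (
	"fmt"
	"time"
)

// clockFixCommand returns the SSH command used to reset the guest clock
// when it drifts from the host by more than the tolerance.
func clockFixCommand(guest, host time.Time, tolerance time.Duration) (string, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		return "", false // drift is acceptable; leave the guest clock alone
	}
	// `date -s @N` sets the system clock from a Unix-epoch timestamp.
	// Writing back the host's time is an assumption of this sketch.
	return fmt.Sprintf("sudo date -s @%d", host.Unix()), true
}

func main() {
	guest := time.Unix(1710760493, 600160434)        // guest clock from the log
	host := guest.Add(-4836011134 * time.Nanosecond) // delta from the log: 4.836011134s
	if cmd, ok := clockFixCommand(guest, host, time.Second); ok {
		fmt.Println(cmd)
	}
}
```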
	I0318 11:14:58.467711    6988 start.go:83] releasing machines lock for "ha-606900", held for 2m17.0683849s
	I0318 11:14:58.468253    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:15:00.626638    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:15:00.626638    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:15:00.627358    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:15:03.172558    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:15:03.172737    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:15:03.176407    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:15:03.176944    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:15:03.187563    6988 ssh_runner.go:195] Run: cat /version.json
	I0318 11:15:03.187563    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:15:05.429669    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:15:05.430027    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:15:05.429669    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:15:05.430027    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:15:05.430027    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:15:05.430027    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:15:08.164238    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:15:08.164425    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:15:08.165138    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:15:08.192916    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:15:08.192916    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:15:08.192916    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:15:08.378416    6988 ssh_runner.go:235] Completed: cat /version.json: (5.1908208s)
	I0318 11:15:08.378416    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2019766s)
	I0318 11:15:08.392455    6988 ssh_runner.go:195] Run: systemctl --version
	I0318 11:15:08.415790    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 11:15:08.426446    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:15:08.437897    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:15:08.470268    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 11:15:08.470268    6988 start.go:494] detecting cgroup driver to use...
	I0318 11:15:08.470268    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:15:08.521885    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:15:08.560519    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:15:08.582589    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:15:08.594793    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:15:08.629378    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:15:08.663294    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:15:08.695346    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:15:08.728361    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:15:08.761540    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 11:15:08.798837    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:15:08.833460    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:15:08.865173    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:15:09.076059    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 11:15:09.111465    6988 start.go:494] detecting cgroup driver to use...
	I0318 11:15:09.123526    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:15:09.164500    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:15:09.200792    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:15:09.247572    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:15:09.285733    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:15:09.324017    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 11:15:09.396860    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:15:09.423438    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:15:09.473183    6988 ssh_runner.go:195] Run: which cri-dockerd
	I0318 11:15:09.495100    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:15:09.514542    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:15:09.562129    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:15:09.768756    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:15:09.962207    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:15:09.962469    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 11:15:10.009364    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:15:10.229626    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:15:12.807677    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.578035s)
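The docker.go:574 step above copies a small daemon.json into the guest before restarting Docker. The log shows only its size (130 bytes), so the sketch below constructs a plausible payload; every field in it is an assumption about typical cgroupfs configuration, not the verbatim file.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Align Docker's cgroup driver with the kubelet's "cgroupDriver: cgroupfs"
	// seen in the KubeletConfiguration later in this log. All fields here are
	// assumptions; the log does not print the actual daemon.json contents.
	daemon := map[string]interface{}{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
		"log-opts":   map[string]string{"max-size": "100m"},
	}
	out, err := json.MarshalIndent(daemon, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // payload that would be copied to /etc/docker/daemon.json
}
```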
	I0318 11:15:12.822604    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:15:12.863840    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:15:12.902801    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:15:13.113512    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:15:13.323879    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:15:13.544907    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:15:13.589406    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:15:13.627073    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:15:13.860016    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:15:13.984295    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:15:13.997606    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 11:15:14.006456    6988 start.go:562] Will wait 60s for crictl version
	I0318 11:15:14.018351    6988 ssh_runner.go:195] Run: which crictl
	I0318 11:15:14.038735    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:15:14.123965    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 11:15:14.134375    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:15:14.180338    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:15:14.219031    6988 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:15:14.219117    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:15:14.223489    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:15:14.223489    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:15:14.223489    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:15:14.223489    6988 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ae:0d:2c Flags:up|broadcast|multicast|running}
	I0318 11:15:14.225997    6988 ip.go:210] interface addr: fe80::f8a6:d6b6:cc4:1ba0/64
	I0318 11:15:14.225997    6988 ip.go:210] interface addr: 172.25.144.1/20
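The ip.go lines above scan the host's adapters for the first one whose name matches the Hyper-V switch prefix. A self-contained Go sketch of that prefix search using only the standard library (the helper name and error text are illustrative, not minikube's code):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// findInterfaceByPrefix scans host interfaces and returns the first whose
// name matches the given prefix, logging the ones that do not match.
func findInterfaceByPrefix(prefix string) (*net.Interface, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for i := range ifaces {
		if strings.HasPrefix(ifaces[i].Name, prefix) {
			return &ifaces[i], nil
		}
		fmt.Printf("%q does not match prefix %q\n", ifaces[i].Name, prefix)
	}
	return nil, fmt.Errorf("no interface matching prefix %q", prefix)
}

func main() {
	ifc, err := findInterfaceByPrefix("vEthernet (Default Switch)")
	if err != nil {
		fmt.Println(err)
		return
	}
	addrs, _ := ifc.Addrs()
	for _, a := range addrs {
		fmt.Println("interface addr:", a) // e.g. 172.25.144.1/20
	}
}
```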
	I0318 11:15:14.236688    6988 ssh_runner.go:195] Run: grep 172.25.144.1	host.minikube.internal$ /etc/hosts
	I0318 11:15:14.244226    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
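The bash pipeline above keeps the /etc/hosts edit idempotent: strip any existing `host.minikube.internal` line, then append the current mapping. A hedged Go equivalent operating on a local file (the helper is illustrative; minikube runs this remotely over SSH):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any stale line ending in "\t<name>" (mirroring
// the `grep -v` above) and appends the fresh "ip\tname" mapping.
func upsertHostsEntry(contents, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(upsertHostsEntry(string(data), "172.25.144.1", "host.minikube.internal"))
}
```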
	I0318 11:15:14.282730    6988 kubeadm.go:877] updating cluster {Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 11:15:14.282730    6988 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:15:14.291096    6988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 11:15:14.317091    6988 docker.go:685] Got preloaded images: 
	I0318 11:15:14.317091    6988 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0318 11:15:14.329604    6988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 11:15:14.361319    6988 ssh_runner.go:195] Run: which lz4
	I0318 11:15:14.367641    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0318 11:15:14.382177    6988 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 11:15:14.390907    6988 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 11:15:14.391708    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0318 11:15:16.867361    6988 docker.go:649] duration metric: took 2.4993461s to copy over tarball
	I0318 11:15:16.879350    6988 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 11:15:27.213341    6988 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.333927s)
	I0318 11:15:27.213341    6988 ssh_runner.go:146] rm: /preloaded.tar.lz4
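The preload flow above probes the guest with `stat` first and copies the ~423 MB tarball only when the probe fails, then unpacks it into /var and removes it. A small Go sketch of that existence check, using a local exec call as a stand-in for the SSH runner (the decision helper is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// needsPreload reports whether the preload tarball must be copied over.
// `stat -c "%s %y"` prints size and mtime; a non-zero exit (status 1 in
// the log) means the file does not exist yet.
func needsPreload(path string) bool {
	return exec.Command("stat", "-c", "%s %y", path).Run() != nil
}

func main() {
	if needsPreload("/preloaded.tar.lz4") {
		fmt.Println("copy the preload tarball over, then extract it:")
		fmt.Println("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4")
	}
}
```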
	I0318 11:15:27.289626    6988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 11:15:27.311710    6988 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0318 11:15:27.358685    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:15:27.584081    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:15:30.940835    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3567329s)
	I0318 11:15:30.950523    6988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 11:15:30.977200    6988 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 11:15:30.977200    6988 cache_images.go:84] Images are preloaded, skipping loading
	I0318 11:15:30.977200    6988 kubeadm.go:928] updating node { 172.25.148.74 8443 v1.28.4 docker true true} ...
	I0318 11:15:30.977634    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-606900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.148.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 11:15:30.985615    6988 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 11:15:31.023676    6988 cni.go:84] Creating CNI manager for ""
	I0318 11:15:31.023676    6988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 11:15:31.023676    6988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 11:15:31.023676    6988 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.148.74 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-606900 NodeName:ha-606900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.148.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.148.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 11:15:31.024296    6988 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.148.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-606900"
	  kubeletExtraArgs:
	    node-ip: 172.25.148.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.148.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 11:15:31.024441    6988 kube-vip.go:111] generating kube-vip config ...
	I0318 11:15:31.036223    6988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 11:15:31.063438    6988 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 11:15:31.064129    6988 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 11:15:31.077227    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:15:31.095777    6988 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 11:15:31.108183    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 11:15:31.131556    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0318 11:15:31.166630    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:15:31.203761    6988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0318 11:15:31.236134    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 11:15:31.278196    6988 ssh_runner.go:195] Run: grep 172.25.159.254	control-plane.minikube.internal$ /etc/hosts
	I0318 11:15:31.284172    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 11:15:31.318863    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:15:31.524915    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:15:31.556536    6988 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900 for IP: 172.25.148.74
	I0318 11:15:31.556573    6988 certs.go:194] generating shared ca certs ...
	I0318 11:15:31.556573    6988 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:31.557159    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0318 11:15:31.558254    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:15:31.558476    6988 certs.go:256] generating profile certs ...
	I0318 11:15:31.559161    6988 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.key
	I0318 11:15:31.559246    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.crt with IP's: []
	I0318 11:15:31.855026    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.crt ...
	I0318 11:15:31.855026    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.crt: {Name:mkb17df9dd67cb5dcc5adc34992716fbc04b8b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:31.856802    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.key ...
	I0318 11:15:31.856802    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.key: {Name:mk23cfdf6c2c724d42b5d3e35a4719ab96f3e140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:31.858111    6988 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.cd5b8555
	I0318 11:15:31.858111    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.cd5b8555 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.148.74 172.25.159.254]
	I0318 11:15:31.987981    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.cd5b8555 ...
	I0318 11:15:31.987981    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.cd5b8555: {Name:mkfbbb9130ee551e1f450e55254fc02f385a5205 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:31.988936    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.cd5b8555 ...
	I0318 11:15:31.988936    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.cd5b8555: {Name:mk12dab8169c7d001cf37e7db396005c581d5ae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:31.989971    6988 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.cd5b8555 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt
	I0318 11:15:32.000772    6988 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.cd5b8555 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key
	I0318 11:15:32.002943    6988 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key
	I0318 11:15:32.002943    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt with IP's: []
	I0318 11:15:32.265600    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt ...
	I0318 11:15:32.265600    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt: {Name:mk61a0234249ded1fb4e50a22d45917a2fce202a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:32.266622    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key ...
	I0318 11:15:32.266622    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key: {Name:mk3980679371cd64440cd7f69688f3489b72c0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:15:32.267424    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 11:15:32.268442    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 11:15:32.268798    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 11:15:32.268926    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 11:15:32.269125    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 11:15:32.269125    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 11:15:32.269423    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 11:15:32.277703    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 11:15:32.278690    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem (1338 bytes)
	W0318 11:15:32.279253    6988 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120_empty.pem, impossibly tiny 0 bytes
	I0318 11:15:32.279253    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0318 11:15:32.279685    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:15:32.279830    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:15:32.280073    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:15:32.280686    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem (1708 bytes)
	I0318 11:15:32.280686    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /usr/share/ca-certificates/91202.pem
	I0318 11:15:32.280686    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:15:32.280686    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem -> /usr/share/ca-certificates/9120.pem
	I0318 11:15:32.281734    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:15:32.333780    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 11:15:32.383075    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:15:32.435769    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:15:32.491944    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 11:15:32.542185    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 11:15:32.592888    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:15:32.640426    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 11:15:32.685142    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /usr/share/ca-certificates/91202.pem (1708 bytes)
	I0318 11:15:32.728594    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:15:32.772436    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem --> /usr/share/ca-certificates/9120.pem (1338 bytes)
	I0318 11:15:32.820527    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 11:15:32.865090    6988 ssh_runner.go:195] Run: openssl version
	I0318 11:15:32.885993    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91202.pem && ln -fs /usr/share/ca-certificates/91202.pem /etc/ssl/certs/91202.pem"
	I0318 11:15:32.919499    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91202.pem
	I0318 11:15:32.926156    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 11:15:32.938534    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91202.pem
	I0318 11:15:32.961726    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91202.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 11:15:32.992688    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:15:33.025552    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:15:33.032459    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:15:33.044488    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:15:33.065870    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 11:15:33.097051    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9120.pem && ln -fs /usr/share/ca-certificates/9120.pem /etc/ssl/certs/9120.pem"
	I0318 11:15:33.127658    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9120.pem
	I0318 11:15:33.136046    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 11:15:33.147689    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9120.pem
	I0318 11:15:33.170767    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9120.pem /etc/ssl/certs/51391683.0"
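The openssl/ln sequence above follows OpenSSL's trust-store convention: CAs in /etc/ssl/certs are located by subject-hash filenames such as `b5213941.0`, so each installed PEM gets a symlink named after its `openssl x509 -hash -noout` output. A sketch of that step (the paths and the shelling-out approach are assumptions of the sketch):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert symlinks a PEM certificate into certsDir under its OpenSSL
// subject-hash name ("<hash>.0"), so the OpenSSL lookup machinery finds it.
func linkCert(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	// Remove any stale link first so the operation stays idempotent,
	// mirroring `ln -fs` above.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```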
	I0318 11:15:33.203224    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:15:33.208830    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 11:15:33.208830    6988 kubeadm.go:391] StartCluster: {Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:15:33.218535    6988 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 11:15:33.258468    6988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 11:15:33.290524    6988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 11:15:33.323280    6988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 11:15:33.340826    6988 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 11:15:33.340826    6988 kubeadm.go:156] found existing configuration files:
	
	I0318 11:15:33.354030    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 11:15:33.370382    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 11:15:33.383195    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 11:15:33.411138    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 11:15:33.427231    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 11:15:33.439051    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 11:15:33.468668    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 11:15:33.483589    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 11:15:33.495256    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 11:15:33.528264    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 11:15:33.545588    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 11:15:33.557341    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 11:15:33.575885    6988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 11:15:34.068559    6988 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 11:15:49.726695    6988 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 11:15:49.726695    6988 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 11:15:49.726695    6988 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 11:15:49.726695    6988 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 11:15:49.727737    6988 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0318 11:15:49.727737    6988 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 11:15:49.730757    6988 out.go:204]   - Generating certificates and keys ...
	I0318 11:15:49.730757    6988 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 11:15:49.731098    6988 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 11:15:49.731098    6988 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 11:15:49.731377    6988 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 11:15:49.731496    6988 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 11:15:49.731617    6988 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 11:15:49.731731    6988 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 11:15:49.732058    6988 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-606900 localhost] and IPs [172.25.148.74 127.0.0.1 ::1]
	I0318 11:15:49.732203    6988 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 11:15:49.732464    6988 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-606900 localhost] and IPs [172.25.148.74 127.0.0.1 ::1]
	I0318 11:15:49.732464    6988 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 11:15:49.732679    6988 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 11:15:49.732809    6988 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 11:15:49.733056    6988 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 11:15:49.733178    6988 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 11:15:49.733300    6988 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 11:15:49.733434    6988 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 11:15:49.733554    6988 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 11:15:49.733799    6988 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 11:15:49.733886    6988 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 11:15:49.736782    6988 out.go:204]   - Booting up control plane ...
	I0318 11:15:49.737337    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 11:15:49.737598    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 11:15:49.737652    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 11:15:49.737652    6988 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 11:15:49.738540    6988 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 11:15:49.738540    6988 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 11:15:49.738540    6988 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 11:15:49.738540    6988 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.625069 seconds
	I0318 11:15:49.738540    6988 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 11:15:49.738540    6988 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 11:15:49.738540    6988 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 11:15:49.738540    6988 kubeadm.go:309] [mark-control-plane] Marking the node ha-606900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 11:15:49.738540    6988 kubeadm.go:309] [bootstrap-token] Using token: 36xohw.fzoxxaltg9qrulz1
	I0318 11:15:49.743396    6988 out.go:204]   - Configuring RBAC rules ...
	I0318 11:15:49.743396    6988 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 11:15:49.743396    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 11:15:49.743396    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 11:15:49.744451    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 11:15:49.744451    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 11:15:49.744451    6988 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 11:15:49.745163    6988 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 11:15:49.745163    6988 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 11:15:49.745163    6988 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 11:15:49.745163    6988 kubeadm.go:309] 
	I0318 11:15:49.745163    6988 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 11:15:49.745163    6988 kubeadm.go:309] 
	I0318 11:15:49.745163    6988 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 11:15:49.745163    6988 kubeadm.go:309] 
	I0318 11:15:49.745163    6988 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 11:15:49.745163    6988 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 11:15:49.745163    6988 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 11:15:49.745163    6988 kubeadm.go:309] 
	I0318 11:15:49.746192    6988 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 11:15:49.746192    6988 kubeadm.go:309] 
	I0318 11:15:49.746192    6988 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 11:15:49.746192    6988 kubeadm.go:309] 
	I0318 11:15:49.746192    6988 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 11:15:49.746192    6988 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 11:15:49.746192    6988 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 11:15:49.746192    6988 kubeadm.go:309] 
	I0318 11:15:49.747189    6988 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 11:15:49.747189    6988 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 11:15:49.747189    6988 kubeadm.go:309] 
	I0318 11:15:49.747189    6988 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 36xohw.fzoxxaltg9qrulz1 \
	I0318 11:15:49.747189    6988 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef \
	I0318 11:15:49.747189    6988 kubeadm.go:309] 	--control-plane 
	I0318 11:15:49.747189    6988 kubeadm.go:309] 
	I0318 11:15:49.747189    6988 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 11:15:49.748182    6988 kubeadm.go:309] 
	I0318 11:15:49.748182    6988 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 36xohw.fzoxxaltg9qrulz1 \
	I0318 11:15:49.748182    6988 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef 
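	For reference: the bootstrap token printed above expires (by kubeadm's default) 24 hours after init. If a node joins later than that, a fresh join command can be generated on the control plane with standard kubeadm subcommands (a sketch, not part of this log's output):
	
		kubeadm token list                          # inspect existing bootstrap tokens
		kubeadm token create --print-join-command   # mint a new token and print a full "kubeadm join ..." line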
	I0318 11:15:49.748182    6988 cni.go:84] Creating CNI manager for ""
	I0318 11:15:49.748182    6988 cni.go:136] multinode detected (1 node found), recommending kindnet
	I0318 11:15:49.752917    6988 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 11:15:49.769298    6988 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 11:15:49.777103    6988 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 11:15:49.777162    6988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 11:15:49.863177    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 11:15:51.610272    6988 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.7470842s)
	I0318 11:15:51.610377    6988 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 11:15:51.624674    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:51.625674    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-606900 minikube.k8s.io/updated_at=2024_03_18T11_15_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd minikube.k8s.io/name=ha-606900 minikube.k8s.io/primary=true
	I0318 11:15:51.642702    6988 ops.go:34] apiserver oom_adj: -16
	I0318 11:15:51.917390    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:52.432422    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:52.921651    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:53.423594    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:53.927648    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:54.427276    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:54.927780    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:55.432091    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:55.918441    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:56.423358    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:56.924624    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:57.430536    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:57.919816    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:58.420603    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:58.924829    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:59.416909    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:15:59.922866    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:00.426554    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:00.934218    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:01.424216    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:01.928826    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:02.422204    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:02.926125    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:16:03.215851    6988 kubeadm.go:1107] duration metric: took 11.6052726s to wait for elevateKubeSystemPrivileges
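	The burst of identical "kubectl get sa default" calls above is minikube polling roughly twice a second until kubeadm has created the "default" ServiceAccount, which is what the 11.6s elevateKubeSystemPrivileges metric measures. A standalone equivalent of that wait loop, sketched in PowerShell under the assumption that kubectl can reach the API server directly (minikube itself runs it over SSH with sudo):
	
		# poll until the "default" ServiceAccount exists
		while (-not (kubectl get sa default 2>$null)) {
		    Start-Sleep -Milliseconds 500
		}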
	W0318 11:16:03.215965    6988 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 11:16:03.215965    6988 kubeadm.go:393] duration metric: took 30.0069459s to StartCluster
	I0318 11:16:03.215965    6988 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:16:03.215965    6988 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 11:16:03.217451    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:16:03.219612    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 11:16:03.219612    6988 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:16:03.219782    6988 start.go:240] waiting for startup goroutines ...
	I0318 11:16:03.219782    6988 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 11:16:03.219930    6988 addons.go:69] Setting storage-provisioner=true in profile "ha-606900"
	I0318 11:16:03.219930    6988 addons.go:234] Setting addon storage-provisioner=true in "ha-606900"
	I0318 11:16:03.219930    6988 addons.go:69] Setting default-storageclass=true in profile "ha-606900"
	I0318 11:16:03.220089    6988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-606900"
	I0318 11:16:03.220163    6988 host.go:66] Checking if "ha-606900" exists ...
	I0318 11:16:03.220514    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:16:03.221165    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:16:03.221858    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:16:03.435924    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 11:16:04.151307    6988 start.go:948] {"host.minikube.internal": 172.25.144.1} host record injected into CoreDNS's ConfigMap
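	The sed pipeline above edits the coredns ConfigMap in place: it inserts a "log" directive ahead of "errors" and, ahead of the "forward . /etc/resolv.conf" line, the hosts stanza below, which is what makes host.minikube.internal resolvable from pods (reconstructed from the sed expression above):
	
		hosts {
		   172.25.144.1 host.minikube.internal
		   fallthrough
		}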
	I0318 11:16:05.622751    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:16:05.622824    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:05.625797    6988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 11:16:05.622824    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:16:05.625842    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:05.628075    6988 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 11:16:05.628193    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 11:16:05.628257    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:16:05.629123    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 11:16:05.630198    6988 kapi.go:59] client config for ha-606900: &rest.Config{Host:"https://172.25.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-606900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-606900\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 11:16:05.632140    6988 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 11:16:05.632217    6988 addons.go:234] Setting addon default-storageclass=true in "ha-606900"
	I0318 11:16:05.632217    6988 host.go:66] Checking if "ha-606900" exists ...
	I0318 11:16:05.633560    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:16:08.033179    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:16:08.033239    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:08.033376    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:16:08.033440    6988 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 11:16:08.033494    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 11:16:08.033440    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:08.033622    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:16:08.033622    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:16:10.409117    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:16:10.409117    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:10.409295    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:16:10.902325    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:16:10.902325    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:10.903400    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:16:11.060410    6988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 11:16:13.179053    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:16:13.180033    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:13.180625    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:16:13.332329    6988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 11:16:13.636035    6988 round_trippers.go:463] GET https://172.25.159.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0318 11:16:13.636106    6988 round_trippers.go:469] Request Headers:
	I0318 11:16:13.636106    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:16:13.636106    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:16:13.651451    6988 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0318 11:16:13.653120    6988 round_trippers.go:463] PUT https://172.25.159.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 11:16:13.653189    6988 round_trippers.go:469] Request Headers:
	I0318 11:16:13.653189    6988 round_trippers.go:473]     Content-Type: application/json
	I0318 11:16:13.653189    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:16:13.653189    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:16:13.658538    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:16:13.663668    6988 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 11:16:13.665252    6988 addons.go:505] duration metric: took 10.445404s for enable addons: enabled=[storage-provisioner default-storageclass]
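	The GET/PUT pair on /apis/storage.k8s.io/v1/storageclasses above is minikube marking the "standard" StorageClass as the cluster default. The same effect can be had by hand via the well-known default-class annotation (a sketch, not taken from this log):
	
		kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'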
	I0318 11:16:13.665795    6988 start.go:245] waiting for cluster config update ...
	I0318 11:16:13.665854    6988 start.go:254] writing updated cluster config ...
	I0318 11:16:13.668278    6988 out.go:177] 
	I0318 11:16:13.680992    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:16:13.680992    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:16:13.686997    6988 out.go:177] * Starting "ha-606900-m02" control-plane node in "ha-606900" cluster
	I0318 11:16:13.691983    6988 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:16:13.691983    6988 cache.go:56] Caching tarball of preloaded images
	I0318 11:16:13.692511    6988 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:16:13.692870    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:16:13.693020    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:16:13.695637    6988 start.go:360] acquireMachinesLock for ha-606900-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:16:13.695741    6988 start.go:364] duration metric: took 75.6µs to acquireMachinesLock for "ha-606900-m02"
	I0318 11:16:13.696049    6988 start.go:93] Provisioning new machine with config: &{Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:16:13.696297    6988 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0318 11:16:13.704521    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 11:16:13.704521    6988 start.go:159] libmachine.API.Create for "ha-606900" (driver="hyperv")
	I0318 11:16:13.704521    6988 client.go:168] LocalClient.Create starting
	I0318 11:16:13.705500    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0318 11:16:13.705784    6988 main.go:141] libmachine: Decoding PEM data...
	I0318 11:16:13.705899    6988 main.go:141] libmachine: Parsing certificate...
	I0318 11:16:13.705958    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0318 11:16:13.706150    6988 main.go:141] libmachine: Decoding PEM data...
	I0318 11:16:13.706150    6988 main.go:141] libmachine: Parsing certificate...
	I0318 11:16:13.706150    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 11:16:15.734669    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 11:16:15.735241    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:15.735241    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 11:16:17.599830    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 11:16:17.599933    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:17.599933    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:16:19.141929    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:16:19.141929    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:19.142081    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:16:22.814700    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:16:22.815207    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:22.817521    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 11:16:23.292263    6988 main.go:141] libmachine: Creating SSH key...
	I0318 11:16:23.676906    6988 main.go:141] libmachine: Creating VM...
	I0318 11:16:23.676906    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:16:26.676845    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:16:26.676845    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:26.677103    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0318 11:16:26.677103    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:16:28.582824    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:16:28.582904    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:28.582904    6988 main.go:141] libmachine: Creating VHD
	I0318 11:16:28.582983    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 11:16:32.465012    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 94A352C9-CBC6-4F8B-B8FF-75EA329F7583
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 11:16:32.465310    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:32.465310    6988 main.go:141] libmachine: Writing magic tar header
	I0318 11:16:32.465415    6988 main.go:141] libmachine: Writing SSH key tar header
	I0318 11:16:32.474675    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 11:16:35.736119    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:35.736723    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:35.736949    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\disk.vhd' -SizeBytes 20000MB
	I0318 11:16:38.352859    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:38.352932    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:38.353004    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-606900-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 11:16:42.115273    6988 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-606900-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 11:16:42.115714    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:42.115714    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-606900-m02 -DynamicMemoryEnabled $false
	I0318 11:16:44.482472    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:44.482687    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:44.482687    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-606900-m02 -Count 2
	I0318 11:16:46.739235    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:46.739235    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:46.739377    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-606900-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\boot2docker.iso'
	I0318 11:16:49.360076    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:49.360076    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:49.360076    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-606900-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\disk.vhd'
	I0318 11:16:52.091800    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:52.091800    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:52.091800    6988 main.go:141] libmachine: Starting VM...
	I0318 11:16:52.092051    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-606900-m02
	I0318 11:16:55.266231    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:16:55.266231    6988 main.go:141] libmachine: [stderr =====>] : 
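	Condensed, the Hyper-V provisioning sequence above amounts to the following standalone PowerShell sketch (machine name, switch, sizes, and paths are taken from this log; run from an elevated prompt):
	
		$name = 'ha-606900-m02'
		$dir  = "C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\$name"
	
		# a small fixed VHD is created first (minikube writes its SSH-key tar payload into it),
		# then converted to a dynamic disk and grown to the requested 20000MB
		Hyper-V\New-VHD     -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
		Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
		Hyper-V\Resize-VHD  -Path "$dir\disk.vhd" -SizeBytes 20000MB
	
		# create the VM with static memory and 2 vCPUs, attach the boot ISO and data disk, then start it
		Hyper-V\New-VM $name -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
		Hyper-V\Set-VMMemory -VMName $name -DynamicMemoryEnabled $false
		Hyper-V\Set-VMProcessor $name -Count 2
		Hyper-V\Set-VMDvdDrive -VMName $name -Path "$dir\boot2docker.iso"
		Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$dir\disk.vhd"
		Hyper-V\Start-VM $name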
	I0318 11:16:55.266327    6988 main.go:141] libmachine: Waiting for host to start...
	I0318 11:16:55.266327    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:16:57.615984    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:16:57.615984    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:16:57.616059    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:00.236468    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:17:00.236468    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:01.249176    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:03.508737    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:03.508764    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:03.508831    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:06.157366    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:17:06.157366    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:07.163178    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:09.431219    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:09.431219    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:09.431219    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:12.040578    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:17:12.041680    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:13.057699    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:15.368804    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:15.369739    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:15.369739    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:18.025466    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:17:18.025466    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:19.029802    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:21.284031    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:21.284031    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:21.284728    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:23.944534    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:23.945551    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:23.945644    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:26.155981    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:26.155981    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:26.156320    6988 machine.go:94] provisionDockerMachine start ...
	I0318 11:17:26.156457    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:28.429950    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:28.429950    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:28.429950    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:31.039834    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:31.039834    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:31.046290    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:17:31.047039    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:17:31.047039    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:17:31.171897    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 11:17:31.171897    6988 buildroot.go:166] provisioning hostname "ha-606900-m02"
	I0318 11:17:31.171897    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:33.382698    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:33.382698    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:33.382698    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:36.023191    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:36.023191    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:36.030310    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:17:36.030440    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:17:36.030440    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-606900-m02 && echo "ha-606900-m02" | sudo tee /etc/hostname
	I0318 11:17:36.194759    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-606900-m02
	
	I0318 11:17:36.194759    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:38.410375    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:38.410375    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:38.410375    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:41.045337    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:41.045337    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:41.052388    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:17:41.052388    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:17:41.052932    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-606900-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-606900-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-606900-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:17:41.195318    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 11:17:41.195379    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0318 11:17:41.195479    6988 buildroot.go:174] setting up certificates
	I0318 11:17:41.195537    6988 provision.go:84] configureAuth start
	I0318 11:17:41.195586    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:43.421138    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:43.421138    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:43.421484    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:46.097212    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:46.097212    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:46.098241    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:48.346410    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:48.346410    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:48.347146    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:50.989911    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:50.990537    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:50.990537    6988 provision.go:143] copyHostCerts
	I0318 11:17:50.990693    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0318 11:17:50.990923    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0318 11:17:50.990923    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0318 11:17:50.991375    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:17:50.992559    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0318 11:17:50.992870    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0318 11:17:50.992870    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0318 11:17:50.993181    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:17:50.994276    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0318 11:17:50.994498    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0318 11:17:50.994498    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0318 11:17:50.994917    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:17:50.995895    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-606900-m02 san=[127.0.0.1 172.25.148.106 ha-606900-m02 localhost minikube]
	I0318 11:17:51.120589    6988 provision.go:177] copyRemoteCerts
	I0318 11:17:51.135575    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:17:51.135575    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:53.324184    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:53.324184    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:53.324753    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:17:55.912764    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:17:55.912764    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:55.913888    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\id_rsa Username:docker}
	I0318 11:17:56.021152    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8855469s)
	I0318 11:17:56.021152    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 11:17:56.022139    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:17:56.076377    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 11:17:56.077382    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 11:17:56.126875    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 11:17:56.127326    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 11:17:56.179437    6988 provision.go:87] duration metric: took 14.9838054s to configureAuth
	I0318 11:17:56.179475    6988 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:17:56.179973    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:17:56.180132    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:17:58.338047    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:17:58.339028    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:17:58.339028    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:00.950913    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:00.951625    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:00.956862    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:18:00.957182    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:18:00.957182    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:18:01.082835    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:18:01.082835    6988 buildroot.go:70] root file system type: tmpfs
	I0318 11:18:01.082835    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:18:01.082835    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:03.273278    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:03.274182    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:03.274182    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:05.926915    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:05.927213    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:05.932751    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:18:05.933426    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:18:05.933426    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.148.74"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:18:06.084370    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.148.74
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:18:06.084370    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:08.263097    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:08.263275    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:08.263361    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:10.867448    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:10.867448    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:10.874852    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:18:10.874852    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:18:10.874852    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:18:13.053268    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 11:18:13.053344    6988 machine.go:97] duration metric: took 46.896728s to provisionDockerMachine
	I0318 11:18:13.053398    6988 client.go:171] duration metric: took 1m59.3475434s to LocalClient.Create
	I0318 11:18:13.053435    6988 start.go:167] duration metric: took 1m59.3481627s to libmachine.API.Create "ha-606900"
	I0318 11:18:13.053435    6988 start.go:293] postStartSetup for "ha-606900-m02" (driver="hyperv")
	I0318 11:18:13.053506    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:18:13.066051    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:18:13.067095    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:15.266461    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:15.266461    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:15.266461    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:17.923989    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:17.923989    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:17.924644    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\id_rsa Username:docker}
	I0318 11:18:18.028046    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9619632s)
	I0318 11:18:18.043354    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:18:18.050399    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:18:18.050458    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0318 11:18:18.050620    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0318 11:18:18.051737    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> 91202.pem in /etc/ssl/certs
	I0318 11:18:18.051737    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /etc/ssl/certs/91202.pem
	I0318 11:18:18.065814    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 11:18:18.084375    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /etc/ssl/certs/91202.pem (1708 bytes)
	I0318 11:18:18.134545    6988 start.go:296] duration metric: took 5.0810774s for postStartSetup
	I0318 11:18:18.138696    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:20.369387    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:20.369586    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:20.369586    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:22.964920    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:22.964920    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:22.965825    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:18:22.969150    6988 start.go:128] duration metric: took 2m9.2720381s to createHost
	I0318 11:18:22.969472    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:25.170668    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:25.170668    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:25.170668    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:27.838464    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:27.838464    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:27.843839    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:18:27.844381    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:18:27.844586    6988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 11:18:27.974384    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710760707.958099644
	
	I0318 11:18:27.974384    6988 fix.go:216] guest clock: 1710760707.958099644
	I0318 11:18:27.974384    6988 fix.go:229] Guest: 2024-03-18 11:18:27.958099644 +0000 UTC Remote: 2024-03-18 11:18:22.9691501 +0000 UTC m=+347.321305401 (delta=4.988949544s)
	I0318 11:18:27.974384    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:30.179075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:30.179652    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:30.179652    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:32.785204    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:32.785204    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:32.791008    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:18:32.791684    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.106 22 <nil> <nil>}
	I0318 11:18:32.791684    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710760707
	I0318 11:18:32.926747    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:18:27 UTC 2024
	
	I0318 11:18:32.926818    6988 fix.go:236] clock set: Mon Mar 18 11:18:27 UTC 2024
	 (err=<nil>)
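The fix.go lines above compare the guest clock (read via `date +%s.%N`) against the host and, since the ~5s delta exceeds tolerance, push the host's epoch seconds into the guest with `sudo date -s @<seconds>`. A minimal sketch of that comparison (the 2s tolerance is an assumption, not minikube's exact constant):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses "date +%s.%N" output and returns guest-minus-host skew.
func guestClockDelta(out string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64) // %N always prints 9 digits
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	host := time.Now()
	d, err := guestClockDelta("1710760707.958099644", host)
	if err != nil {
		panic(err)
	}
	fmt.Println("delta:", d)
	if d < -2*time.Second || d > 2*time.Second { // hypothetical tolerance
		// The log's remediation: set the guest clock from the host.
		fmt.Printf("would run: sudo date -s @%d\n", host.Unix())
	}
}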
	I0318 11:18:32.926818    6988 start.go:83] releasing machines lock for "ha-606900-m02", held for 2m19.2301296s
	I0318 11:18:32.927115    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:35.100434    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:35.101267    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:35.101267    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:37.763215    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:37.763215    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:37.766012    6988 out.go:177] * Found network options:
	I0318 11:18:37.768781    6988 out.go:177]   - NO_PROXY=172.25.148.74
	W0318 11:18:37.771834    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:18:37.775295    6988 out.go:177]   - NO_PROXY=172.25.148.74
	W0318 11:18:37.778570    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:18:37.779944    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:18:37.783608    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:18:37.783739    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:37.795320    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 11:18:37.795320    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m02 ).state
	I0318 11:18:40.069228    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:40.069228    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:40.069335    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:40.078448    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:40.078448    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:40.078448    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:42.807693    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:42.807882    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:42.808463    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\id_rsa Username:docker}
	I0318 11:18:42.832137    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.106
	
	I0318 11:18:42.832776    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:42.833185    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m02\id_rsa Username:docker}
	I0318 11:18:42.995256    6988 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1999034s)
	I0318 11:18:42.995256    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2116157s)
	W0318 11:18:42.995256    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:18:43.008592    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:18:43.039208    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 11:18:43.039208    6988 start.go:494] detecting cgroup driver to use...
	I0318 11:18:43.039208    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:18:43.088819    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:18:43.125725    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:18:43.145356    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:18:43.157802    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:18:43.195262    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:18:43.226901    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:18:43.265266    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:18:43.296209    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:18:43.327219    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 11:18:43.359989    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:18:43.392395    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:18:43.422385    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:18:43.640591    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
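The sed pipeline above flips containerd to the cgroupfs driver by rewriting config.toml in place before the restart. The same SystemdCgroup edit expressed in Go, as a sketch (sample input inlined; the real flow shells out to sed exactly as logged):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}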
	I0318 11:18:43.676274    6988 start.go:494] detecting cgroup driver to use...
	I0318 11:18:43.686572    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:18:43.726055    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:18:43.762601    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:18:43.812441    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:18:43.849220    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:18:43.888333    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 11:18:43.955295    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:18:43.980270    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:18:44.032307    6988 ssh_runner.go:195] Run: which cri-dockerd
	I0318 11:18:44.049882    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:18:44.071945    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:18:44.121312    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:18:44.339527    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:18:44.543976    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:18:44.544044    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 11:18:44.592143    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:18:44.804315    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:18:47.357918    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5525709s)
	I0318 11:18:47.370766    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:18:47.411486    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:18:47.449152    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:18:47.669084    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:18:47.876260    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:18:48.095103    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:18:48.135862    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:18:48.173199    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:18:48.378858    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:18:48.487730    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:18:48.501259    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
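start.go:541 above waits up to 60s for /var/run/cri-dockerd.sock to appear before probing crictl. A generic wait-for-path loop in Go (the 500ms poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}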
	I0318 11:18:48.512010    6988 start.go:562] Will wait 60s for crictl version
	I0318 11:18:48.523923    6988 ssh_runner.go:195] Run: which crictl
	I0318 11:18:48.542717    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:18:48.618593    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 11:18:48.628679    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:18:48.676020    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:18:48.716171    6988 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:18:48.719341    6988 out.go:177]   - env NO_PROXY=172.25.148.74
	I0318 11:18:48.726055    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:18:48.730688    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:18:48.730688    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:18:48.730688    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:18:48.730688    6988 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ae:0d:2c Flags:up|broadcast|multicast|running}
	I0318 11:18:48.733137    6988 ip.go:210] interface addr: fe80::f8a6:d6b6:cc4:1ba0/64
	I0318 11:18:48.733137    6988 ip.go:210] interface addr: 172.25.144.1/20
	I0318 11:18:48.746157    6988 ssh_runner.go:195] Run: grep 172.25.144.1	host.minikube.internal$ /etc/hosts
	I0318 11:18:48.753711    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
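The bash one-liner above is an idempotent hosts-file update: drop any stale host.minikube.internal line, append the current mapping, and copy the result back over /etc/hosts. The same logic as a Go sketch (paths and values are the real ones from the log; the staged-copy step is simplified):

package main

import (
	"os"
	"strings"
)

// upsertHost rewrites hosts content so exactly one line maps name to ip.
func upsertHost(content, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(content, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	updated := upsertHost(string(data), "172.25.144.1", "host.minikube.internal")
	// The log stages via /tmp/h.$$ and sudo cp; writing a staging file here for brevity.
	_ = os.WriteFile("/tmp/hosts.new", []byte(updated), 0644)
}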
	I0318 11:18:48.776669    6988 mustload.go:65] Loading cluster: ha-606900
	I0318 11:18:48.777817    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:18:48.778528    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:18:50.955932    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:50.956379    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:50.956420    6988 host.go:66] Checking if "ha-606900" exists ...
	I0318 11:18:50.956675    6988 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900 for IP: 172.25.148.106
	I0318 11:18:50.956675    6988 certs.go:194] generating shared ca certs ...
	I0318 11:18:50.956675    6988 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:18:50.957756    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0318 11:18:50.958266    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:18:50.958376    6988 certs.go:256] generating profile certs ...
	I0318 11:18:50.959398    6988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.key
	I0318 11:18:50.959733    6988 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.2a060aa0
	I0318 11:18:50.960133    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.2a060aa0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.148.74 172.25.148.106 172.25.159.254]
	I0318 11:18:51.197801    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.2a060aa0 ...
	I0318 11:18:51.197801    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.2a060aa0: {Name:mk9d7ae9ba5a8c0b27ce142ce3b747943c789334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:18:51.199611    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.2a060aa0 ...
	I0318 11:18:51.199611    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.2a060aa0: {Name:mk24deb8d747f6290c3ccba9faa6f7a8d3fb3ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:18:51.200552    6988 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.2a060aa0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt
	I0318 11:18:51.212892    6988 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.2a060aa0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key
	I0318 11:18:51.213842    6988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key
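crypto.go:68 above mints an apiserver certificate whose IP SANs cover the service VIP (10.96.0.1), localhost, both control-plane node IPs, and the kube-vip address 172.25.159.254. A compressed sketch of issuing a cert with IP SANs using crypto/x509 (self-signed here for brevity; minikube actually signs with its minikubeCA key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the log
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.25.148.74"), net.ParseIP("172.25.148.106"), net.ParseIP("172.25.159.254"),
		},
	}
	// Self-signed: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}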
	I0318 11:18:51.214833    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 11:18:51.214974    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 11:18:51.215200    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 11:18:51.215399    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 11:18:51.215605    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 11:18:51.215769    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 11:18:51.215949    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 11:18:51.216102    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 11:18:51.216279    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem (1338 bytes)
	W0318 11:18:51.216279    6988 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120_empty.pem, impossibly tiny 0 bytes
	I0318 11:18:51.216891    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0318 11:18:51.217144    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:18:51.217368    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:18:51.217368    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:18:51.217368    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem (1708 bytes)
	I0318 11:18:51.217368    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem -> /usr/share/ca-certificates/9120.pem
	I0318 11:18:51.217368    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /usr/share/ca-certificates/91202.pem
	I0318 11:18:51.217368    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:18:51.217368    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:18:53.387606    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:18:53.387606    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:53.388037    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:18:56.015490    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:18:56.016542    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:18:56.016760    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:18:56.114235    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0318 11:18:56.121995    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 11:18:56.155759    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0318 11:18:56.163678    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0318 11:18:56.200671    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 11:18:56.208895    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 11:18:56.244780    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0318 11:18:56.252608    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0318 11:18:56.286707    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0318 11:18:56.293568    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 11:18:56.342878    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0318 11:18:56.350781    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0318 11:18:56.371657    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:18:56.427795    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 11:18:56.486459    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:18:56.534836    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:18:56.585359    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 11:18:56.630393    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 11:18:56.682727    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:18:56.731758    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 11:18:56.779679    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem --> /usr/share/ca-certificates/9120.pem (1338 bytes)
	I0318 11:18:56.831849    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /usr/share/ca-certificates/91202.pem (1708 bytes)
	I0318 11:18:56.882835    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:18:56.933127    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 11:18:56.966877    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0318 11:18:57.003068    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 11:18:57.038971    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0318 11:18:57.074346    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 11:18:57.110027    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0318 11:18:57.144910    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 11:18:57.203717    6988 ssh_runner.go:195] Run: openssl version
	I0318 11:18:57.228299    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:18:57.260581    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:18:57.268119    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:18:57.280738    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:18:57.303370    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 11:18:57.337414    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9120.pem && ln -fs /usr/share/ca-certificates/9120.pem /etc/ssl/certs/9120.pem"
	I0318 11:18:57.372167    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9120.pem
	I0318 11:18:57.379760    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 11:18:57.393047    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9120.pem
	I0318 11:18:57.415692    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9120.pem /etc/ssl/certs/51391683.0"
	I0318 11:18:57.454750    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91202.pem && ln -fs /usr/share/ca-certificates/91202.pem /etc/ssl/certs/91202.pem"
	I0318 11:18:57.488390    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91202.pem
	I0318 11:18:57.496913    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 11:18:57.519801    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91202.pem
	I0318 11:18:57.543042    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91202.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 11:18:57.584527    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:18:57.591610    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 11:18:57.591974    6988 kubeadm.go:928] updating node {m02 172.25.148.106 8443 v1.28.4 docker true true} ...
	I0318 11:18:57.591974    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-606900-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.148.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
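The kubelet unit drop-in above is rendered per node: the binary path embeds the Kubernetes version, and --hostname-override/--node-ip are filled from the machine config. A sketch with text/template (field names here are illustrative, not minikube's actual template variables):

package main

import (
	"os"
	"text/template"
)

const unit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log lines above.
	_ = t.Execute(os.Stdout, map[string]string{
		"Version":  "v1.28.4",
		"NodeName": "ha-606900-m02",
		"NodeIP":   "172.25.148.106",
	})
}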
	I0318 11:18:57.591974    6988 kube-vip.go:111] generating kube-vip config ...
	I0318 11:18:57.604686    6988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 11:18:57.634413    6988 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 11:18:57.634413    6988 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
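Before the manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes, a few lines below), it can be sanity-checked by unmarshaling it. A sketch with gopkg.in/yaml.v3 and minimal structs (the struct shape is an assumption, just enough to reach the image and the VIP env entry):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type pod struct {
	Spec struct {
		Containers []struct {
			Image string `yaml:"image"`
			Env   []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var p pod
	if err := yaml.Unmarshal(data, &p); err != nil {
		panic(err)
	}
	if len(p.Spec.Containers) == 0 {
		panic("no containers in manifest")
	}
	fmt.Println("image:", p.Spec.Containers[0].Image) // expect ghcr.io/kube-vip/kube-vip:v0.7.1
	for _, e := range p.Spec.Containers[0].Env {
		if e.Name == "address" {
			fmt.Println("VIP:", e.Value) // expect 172.25.159.254
		}
	}
}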
	I0318 11:18:57.646376    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:18:57.671715    6988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 11:18:57.683394    6988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 11:18:57.706582    6988 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
	I0318 11:18:57.706644    6988 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0318 11:18:57.706732    6988 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
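download.go:107 above fetches each binary alongside its published .sha256 and verifies it before caching (the ?checksum=file:... suffix is checksum-from-URL syntax for exactly that). A standalone equivalent with net/http and crypto/sha256:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads url and returns the body bytes.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // .sha256 file holds the hex digest
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	fmt.Println("kubectl verified,", len(bin), "bytes")
}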
	I0318 11:18:58.653946    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:18:58.665920    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:18:58.673924    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 11:18:58.673924    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 11:19:01.594637    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:19:01.611573    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:19:01.624199    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 11:19:01.624628    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 11:19:05.617433    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:19:05.648435    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:19:05.659765    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:19:05.668035    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 11:19:05.668265    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0318 11:19:06.362735    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 11:19:06.382756    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0318 11:19:06.425562    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:19:06.464896    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 11:19:06.514097    6988 ssh_runner.go:195] Run: grep 172.25.159.254	control-plane.minikube.internal$ /etc/hosts
	I0318 11:19:06.519890    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 11:19:06.560914    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:19:06.795370    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:19:06.833813    6988 host.go:66] Checking if "ha-606900" exists ...
	I0318 11:19:06.834439    6988 start.go:316] joinCluster: &{Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.148.106 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:19:06.834439    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 11:19:06.834439    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:19:09.001061    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:19:09.001731    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:19:09.001994    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:19:11.676669    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:19:11.676669    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:19:11.677265    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:19:12.030333    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1958608s)
	I0318 11:19:12.030522    6988 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.25.148.106 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:19:12.030576    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0x9lic.e93j8sfuv95zwn47 --discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-606900-m02 --control-plane --apiserver-advertise-address=172.25.148.106 --apiserver-bind-port=8443"
	I0318 11:20:13.769317    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0x9lic.e93j8sfuv95zwn47 --discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-606900-m02 --control-plane --apiserver-advertise-address=172.25.148.106 --apiserver-bind-port=8443": (1m1.7382852s)
	I0318 11:20:13.769453    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 11:20:14.478554    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-606900-m02 minikube.k8s.io/updated_at=2024_03_18T11_20_14_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd minikube.k8s.io/name=ha-606900 minikube.k8s.io/primary=false
	I0318 11:20:14.680569    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-606900-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 11:20:14.856695    6988 start.go:318] duration metric: took 1m8.0218267s to joinCluster
	I0318 11:20:14.857680    6988 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.25.148.106 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:20:14.859688    6988 out.go:177] * Verifying Kubernetes components...
	I0318 11:20:14.858668    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:20:14.876681    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:20:15.349006    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:20:15.377688    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 11:20:15.378453    6988 kapi.go:59] client config for ha-606900: &rest.Config{Host:"https://172.25.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-606900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-606900\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 11:20:15.378640    6988 kubeadm.go:477] Overriding stale ClientConfig host https://172.25.159.254:8443 with https://172.25.148.74:8443
	I0318 11:20:15.379440    6988 node_ready.go:35] waiting up to 6m0s for node "ha-606900-m02" to be "Ready" ...
	I0318 11:20:15.379678    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:15.379762    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:15.379788    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:15.379788    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:15.395437    6988 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0318 11:20:15.882575    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:15.882575    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:15.882575    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:15.882575    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:15.889596    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:20:16.387165    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:16.387232    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:16.387232    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:16.387293    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:16.394827    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:20:16.893984    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:16.893984    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:16.893984    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:16.893984    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:16.898982    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:17.387108    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:17.387108    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:17.387108    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:17.387108    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:17.392218    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:17.392923    6988 node_ready.go:53] node "ha-606900-m02" has status "Ready":"False"
	I0318 11:20:17.895715    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:17.895786    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:17.895786    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:17.895786    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:17.902216    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:20:18.388771    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:18.388771    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:18.388771    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:18.388771    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:18.395072    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:20:18.880438    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:18.880438    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:18.880438    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:18.880438    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:18.888017    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:20:19.390780    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:19.391146    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:19.391146    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:19.391146    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:19.416551    6988 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0318 11:20:19.417623    6988 node_ready.go:53] node "ha-606900-m02" has status "Ready":"False"
	I0318 11:20:19.881328    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:19.881328    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:19.881328    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:19.881328    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:19.887357    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:20:20.391926    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:20.391926    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:20.391926    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:20.391926    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:20.397921    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:20.884821    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:20.884821    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:20.885109    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:20.885109    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:20.893814    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:20:21.391402    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:21.391402    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:21.391402    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:21.391402    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:21.407036    6988 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0318 11:20:21.880809    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:21.880809    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:21.880809    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:21.880809    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:21.886700    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:21.887444    6988 node_ready.go:53] node "ha-606900-m02" has status "Ready":"False"
	I0318 11:20:22.385448    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:22.385448    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:22.385766    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:22.385766    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:22.390544    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:22.894371    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:22.894371    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:22.894456    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:22.894456    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:22.899898    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:23.386476    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:23.386614    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.386614    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.386643    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.391253    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:23.889873    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:23.889873    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.889873    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.889873    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.895254    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:23.896284    6988 node_ready.go:49] node "ha-606900-m02" has status "Ready":"True"
	I0318 11:20:23.896358    6988 node_ready.go:38] duration metric: took 8.5168075s for node "ha-606900-m02" to be "Ready" ...
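node_ready.go above polls GET /api/v1/nodes/ha-606900-m02 (against the surviving apiserver, after overriding the stale VIP host) until the Ready condition flips to True, which takes ~8.5s here. The same check expressed with client-go, as a sketch (the kubeconfig path is the one from the log; the poll interval is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // the log waits up to 6m0s
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-606900-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}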
	I0318 11:20:23.896358    6988 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:20:23.896498    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:20:23.896498    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.896593    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.896736    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.904321    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:20:23.916702    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jsf9x" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.916702    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jsf9x
	I0318 11:20:23.916702    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.916702    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.916702    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.920846    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:23.922462    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:23.922462    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.922518    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.922518    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.926562    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:23.928001    6988 pod_ready.go:92] pod "coredns-5dd5756b68-jsf9x" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:23.928061    6988 pod_ready.go:81] duration metric: took 11.359ms for pod "coredns-5dd5756b68-jsf9x" in "kube-system" namespace to be "Ready" ...
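
Each pod_ready.go pass above fetches the pod, then its node, and declares success when the pod reports condition Ready=True. A sketch of that condition test (helper name is an illustrative assumption):

```go
// Sketch only: the PodReady-condition check behind 'has status "Ready":"True"'.
package waiters

import (
	corev1 "k8s.io/api/core/v1"
)

// isPodReady returns true when the pod's status carries condition Ready=True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```
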
	I0318 11:20:23.928061    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wvh6v" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.928259    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wvh6v
	I0318 11:20:23.928321    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.928321    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.928321    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.932012    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:20:23.933505    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:23.933505    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.933505    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.933505    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.937785    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:23.938321    6988 pod_ready.go:92] pod "coredns-5dd5756b68-wvh6v" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:23.938321    6988 pod_ready.go:81] duration metric: took 10.2599ms for pod "coredns-5dd5756b68-wvh6v" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.938321    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.938321    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900
	I0318 11:20:23.938321    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.938321    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.938321    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.943028    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:23.944363    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:23.944485    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.944485    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.944485    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.947776    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:20:23.949742    6988 pod_ready.go:92] pod "etcd-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:23.949742    6988 pod_ready.go:81] duration metric: took 11.4211ms for pod "etcd-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.949742    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.949925    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m02
	I0318 11:20:23.950000    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.950000    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.950000    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.953933    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:20:23.954614    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:23.954675    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:23.954675    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:23.954675    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:23.958273    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:20:23.959704    6988 pod_ready.go:92] pod "etcd-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:23.959704    6988 pod_ready.go:81] duration metric: took 9.9625ms for pod "etcd-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:23.959704    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:24.091948    6988 request.go:629] Waited for 132.1648ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900
	I0318 11:20:24.092368    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900
	I0318 11:20:24.092368    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:24.092368    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:24.092368    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:24.097403    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:24.295753    6988 request.go:629] Waited for 196.7343ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:24.295962    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:24.296045    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:24.296045    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:24.296085    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:24.302389    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:20:24.304040    6988 pod_ready.go:92] pod "kube-apiserver-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:24.304040    6988 pod_ready.go:81] duration metric: took 344.3336ms for pod "kube-apiserver-ha-606900" in "kube-system" namespace to be "Ready" ...
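
The request.go:629 "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter delaying requests; the knobs are the QPS and Burst fields on rest.Config (client-go's longstanding defaults are 5 and 10). A sketch of where those knobs live; the kubeconfig handling and helper name are assumptions:

```go
// Sketch only: the client-side rate limit that produces the request.go:629
// throttling messages in the log when a request waits on the limiter.
package waiters

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // sustained requests/second before the limiter queues requests
	cfg.Burst = 10 // short-burst allowance above QPS
	return kubernetes.NewForConfig(cfg)
}
```
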
	I0318 11:20:24.304040    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:24.498774    6988 request.go:629] Waited for 194.7324ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900-m02
	I0318 11:20:24.498942    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900-m02
	I0318 11:20:24.498942    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:24.498942    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:24.499054    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:24.505373    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:20:24.702752    6988 request.go:629] Waited for 196.6114ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:24.703037    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:24.703037    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:24.703037    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:24.703135    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:24.708858    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:24.709669    6988 pod_ready.go:92] pod "kube-apiserver-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:24.709669    6988 pod_ready.go:81] duration metric: took 405.626ms for pod "kube-apiserver-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:24.709669    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:24.890744    6988 request.go:629] Waited for 180.9519ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900
	I0318 11:20:24.891017    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900
	I0318 11:20:24.891167    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:24.891207    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:24.891207    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:24.897057    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:25.095748    6988 request.go:629] Waited for 197.1813ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:25.095968    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:25.095968    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:25.095968    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:25.096032    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:25.101033    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:25.101939    6988 pod_ready.go:92] pod "kube-controller-manager-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:25.101939    6988 pod_ready.go:81] duration metric: took 392.2683ms for pod "kube-controller-manager-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:25.101939    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:25.298400    6988 request.go:629] Waited for 196.4596ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900-m02
	I0318 11:20:25.298992    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900-m02
	I0318 11:20:25.298992    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:25.298992    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:25.298992    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:25.309741    6988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:20:25.500693    6988 request.go:629] Waited for 189.227ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:25.501327    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:25.501373    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:25.501373    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:25.501373    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:25.508972    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:20:25.509708    6988 pod_ready.go:92] pod "kube-controller-manager-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:25.509708    6988 pod_ready.go:81] duration metric: took 407.766ms for pod "kube-controller-manager-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:25.509708    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fk4wg" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:25.704197    6988 request.go:629] Waited for 194.2953ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fk4wg
	I0318 11:20:25.704457    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fk4wg
	I0318 11:20:25.704534    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:25.704534    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:25.704534    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:25.709908    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:25.890618    6988 request.go:629] Waited for 178.9881ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:25.890787    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:25.890787    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:25.890787    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:25.890850    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:25.898251    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:20:25.898875    6988 pod_ready.go:92] pod "kube-proxy-fk4wg" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:25.898875    6988 pod_ready.go:81] duration metric: took 389.1645ms for pod "kube-proxy-fk4wg" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:25.898875    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s9lzf" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:26.094968    6988 request.go:629] Waited for 196.0917ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9lzf
	I0318 11:20:26.095254    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9lzf
	I0318 11:20:26.095329    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:26.095363    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:26.095363    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:26.100953    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:26.296438    6988 request.go:629] Waited for 194.0859ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:26.296929    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:26.296929    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:26.296929    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:26.296929    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:26.301985    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:20:26.303830    6988 pod_ready.go:92] pod "kube-proxy-s9lzf" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:26.303830    6988 pod_ready.go:81] duration metric: took 404.9527ms for pod "kube-proxy-s9lzf" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:26.303830    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:26.503015    6988 request.go:629] Waited for 199.0789ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900
	I0318 11:20:26.503503    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900
	I0318 11:20:26.503503    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:26.503503    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:26.503503    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:26.510540    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:20:26.692698    6988 request.go:629] Waited for 181.092ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:26.693179    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:20:26.693240    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:26.693240    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:26.693240    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:26.715483    6988 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0318 11:20:26.717373    6988 pod_ready.go:92] pod "kube-scheduler-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:26.717432    6988 pod_ready.go:81] duration metric: took 413.5988ms for pod "kube-scheduler-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:26.717432    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:26.897834    6988 request.go:629] Waited for 180.0383ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900-m02
	I0318 11:20:26.898028    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900-m02
	I0318 11:20:26.898028    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:26.898028    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:26.898174    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:26.906801    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:20:27.101664    6988 request.go:629] Waited for 193.6185ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:27.101838    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:20:27.101838    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:27.101838    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:27.101838    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:27.110484    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:20:27.112089    6988 pod_ready.go:92] pod "kube-scheduler-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:20:27.112089    6988 pod_ready.go:81] duration metric: took 394.591ms for pod "kube-scheduler-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:20:27.112089    6988 pod_ready.go:38] duration metric: took 3.2157104s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:20:27.112089    6988 api_server.go:52] waiting for apiserver process to appear ...
	I0318 11:20:27.126495    6988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:20:27.160341    6988 api_server.go:72] duration metric: took 12.3021066s to wait for apiserver process to appear ...
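
The ssh_runner.go:195 line above runs `sudo pgrep -xnf kube-apiserver.*minikube.*` inside the guest over SSH. A sketch of that kind of remote command with golang.org/x/crypto/ssh; the user, host-key handling, and helper name are illustrative assumptions:

```go
// Sketch only: executing a remote check like the pgrep above over SSH.
package waiters

import (
	"golang.org/x/crypto/ssh"
)

func runRemote(addr string, key ssh.Signer, cmd string) (string, error) {
	cfg := &ssh.ClientConfig{
		User:            "docker", // assumed; the boot2docker guest's default user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(key)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test-VM host keys
	}
	client, err := ssh.Dial("tcp", addr, cfg) // addr like "172.25.148.74:22"
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}
```
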
	I0318 11:20:27.160412    6988 api_server.go:88] waiting for apiserver healthz status ...
	I0318 11:20:27.160412    6988 api_server.go:253] Checking apiserver healthz at https://172.25.148.74:8443/healthz ...
	I0318 11:20:27.169273    6988 api_server.go:279] https://172.25.148.74:8443/healthz returned 200:
	ok
	I0318 11:20:27.169998    6988 round_trippers.go:463] GET https://172.25.148.74:8443/version
	I0318 11:20:27.170108    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:27.170108    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:27.170108    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:27.172208    6988 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:20:27.172208    6988 api_server.go:141] control plane version: v1.28.4
	I0318 11:20:27.172208    6988 api_server.go:131] duration metric: took 11.7955ms to wait for apiserver health ...
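
The two probes above (GET /healthz returning "ok", then GET /version yielding v1.28.4) map onto a raw path request through the authenticated discovery client plus a ServerVersion call. A hedged sketch:

```go
// Sketch only: the healthz and version probes logged above.
package waiters

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return fmt.Errorf("healthz: %w", err)
	}
	fmt.Printf("healthz returned: %s\n", body) // expect "ok", as in the log
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion) // e.g. v1.28.4
	return nil
}
```
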
	I0318 11:20:27.172208    6988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 11:20:27.292506    6988 request.go:629] Waited for 119.8573ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:20:27.292594    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:20:27.292594    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:27.292594    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:27.292713    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:27.304042    6988 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 11:20:27.311738    6988 system_pods.go:59] 17 kube-system pods found
	I0318 11:20:27.311738    6988 system_pods.go:61] "coredns-5dd5756b68-jsf9x" [05681724-a32a-40c0-9f26-1c1eb9dffb65] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "coredns-5dd5756b68-wvh6v" [843ee0ec-fcfd-4763-8c92-acfe93bec900] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "etcd-ha-606900" [ed704c6d-aba3-496c-9988-c9f86218f1b4] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "etcd-ha-606900-m02" [a453b1e7-143c-4ea7-a1f4-f6dc6f8aa0b8] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kindnet-8977g" [97e55124-90c8-4cda-854c-ee1059fafdac] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kindnet-b68s4" [d2b7c03a-1303-4e1d-bf2b-2975716685d6] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kube-apiserver-ha-606900" [90f9b505-a404-4227-8a93-8d74ab235009] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kube-apiserver-ha-606900-m02" [b3373a21-b66f-42c9-a088-97e3a86cd9fd] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kube-controller-manager-ha-606900" [d3660558-d0d0-430f-baeb-912cef1a751f] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kube-controller-manager-ha-606900-m02" [93c8139a-db05-4492-a62d-13ecabdadab6] Running
	I0318 11:20:27.311738    6988 system_pods.go:61] "kube-proxy-fk4wg" [3b8fe48c-5035-4e97-9a79-73907e53d2ef] Running
	I0318 11:20:27.312264    6988 system_pods.go:61] "kube-proxy-s9lzf" [c0ba2c37-0dea-43c1-b2d4-ce36b6f6e9ff] Running
	I0318 11:20:27.312264    6988 system_pods.go:61] "kube-scheduler-ha-606900" [6efc4fea-f6fe-4057-96b0-fd62ba3aba5d] Running
	I0318 11:20:27.312264    6988 system_pods.go:61] "kube-scheduler-ha-606900-m02" [f1646aeb-90ea-46f7-a0f9-28b3b68f341c] Running
	I0318 11:20:27.312302    6988 system_pods.go:61] "kube-vip-ha-606900" [540ec4bc-f9bc-4710-be1e-bb289e8cbea4] Running
	I0318 11:20:27.312302    6988 system_pods.go:61] "kube-vip-ha-606900-m02" [9063c185-9922-4ca7-82df-34db3af5f0be] Running
	I0318 11:20:27.312302    6988 system_pods.go:61] "storage-provisioner" [d03b3748-8b89-4a55-9e0e-871a5b79532f] Running
	I0318 11:20:27.312302    6988 system_pods.go:74] duration metric: took 140.0933ms to wait for pod list to return data ...
	I0318 11:20:27.312346    6988 default_sa.go:34] waiting for default service account to be created ...
	I0318 11:20:27.495595    6988 request.go:629] Waited for 183.2264ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:20:27.495907    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:20:27.495907    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:27.495907    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:27.495907    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:27.505986    6988 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 11:20:27.507397    6988 default_sa.go:45] found service account: "default"
	I0318 11:20:27.507457    6988 default_sa.go:55] duration metric: took 195.0884ms for default service account to be created ...
	I0318 11:20:27.507457    6988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 11:20:27.698288    6988 request.go:629] Waited for 190.4253ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:20:27.698510    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:20:27.698702    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:27.698715    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:27.698715    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:27.707042    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:20:27.713449    6988 system_pods.go:86] 17 kube-system pods found
	I0318 11:20:27.713449    6988 system_pods.go:89] "coredns-5dd5756b68-jsf9x" [05681724-a32a-40c0-9f26-1c1eb9dffb65] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "coredns-5dd5756b68-wvh6v" [843ee0ec-fcfd-4763-8c92-acfe93bec900] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "etcd-ha-606900" [ed704c6d-aba3-496c-9988-c9f86218f1b4] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "etcd-ha-606900-m02" [a453b1e7-143c-4ea7-a1f4-f6dc6f8aa0b8] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kindnet-8977g" [97e55124-90c8-4cda-854c-ee1059fafdac] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kindnet-b68s4" [d2b7c03a-1303-4e1d-bf2b-2975716685d6] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-apiserver-ha-606900" [90f9b505-a404-4227-8a93-8d74ab235009] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-apiserver-ha-606900-m02" [b3373a21-b66f-42c9-a088-97e3a86cd9fd] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-controller-manager-ha-606900" [d3660558-d0d0-430f-baeb-912cef1a751f] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-controller-manager-ha-606900-m02" [93c8139a-db05-4492-a62d-13ecabdadab6] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-proxy-fk4wg" [3b8fe48c-5035-4e97-9a79-73907e53d2ef] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-proxy-s9lzf" [c0ba2c37-0dea-43c1-b2d4-ce36b6f6e9ff] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-scheduler-ha-606900" [6efc4fea-f6fe-4057-96b0-fd62ba3aba5d] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-scheduler-ha-606900-m02" [f1646aeb-90ea-46f7-a0f9-28b3b68f341c] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-vip-ha-606900" [540ec4bc-f9bc-4710-be1e-bb289e8cbea4] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "kube-vip-ha-606900-m02" [9063c185-9922-4ca7-82df-34db3af5f0be] Running
	I0318 11:20:27.713449    6988 system_pods.go:89] "storage-provisioner" [d03b3748-8b89-4a55-9e0e-871a5b79532f] Running
	I0318 11:20:27.713449    6988 system_pods.go:126] duration metric: took 205.9906ms to wait for k8s-apps to be running ...
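
The "waiting for k8s-apps to be running" pass above is a single list of kube-system pods checked for phase. A sketch of that check (allowing Succeeded for completed pods is an assumption):

```go
// Sketch only: list kube-system pods and require each to be Running.
package waiters

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func systemPodsRunning(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
			return fmt.Errorf("pod %q is %s, not Running", p.Name, p.Status.Phase)
		}
	}
	fmt.Printf("%d kube-system pods found, all running\n", len(pods.Items))
	return nil
}
```
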
	I0318 11:20:27.713449    6988 system_svc.go:44] waiting for kubelet service to be running ...
	I0318 11:20:27.724669    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:20:27.753246    6988 system_svc.go:56] duration metric: took 39.7973ms (WaitForService) to wait for kubelet
	I0318 11:20:27.753246    6988 kubeadm.go:576] duration metric: took 12.8954846s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:20:27.753246    6988 node_conditions.go:102] verifying NodePressure condition ...
	I0318 11:20:27.901397    6988 request.go:629] Waited for 147.9291ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes
	I0318 11:20:27.901601    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes
	I0318 11:20:27.901601    6988 round_trippers.go:469] Request Headers:
	I0318 11:20:27.901601    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:20:27.901601    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:20:27.906444    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:20:27.908161    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:20:27.908161    6988 node_conditions.go:123] node cpu capacity is 2
	I0318 11:20:27.908161    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:20:27.908161    6988 node_conditions.go:123] node cpu capacity is 2
	I0318 11:20:27.908161    6988 node_conditions.go:105] duration metric: took 154.9138ms to run NodePressure ...
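
The NodePressure verification above reads each node's capacity (ephemeral-storage, cpu) and requires the pressure conditions to be clear. A sketch of both reads; the failure message is an illustrative assumption:

```go
// Sketch only: read node capacity and require the three pressure conditions
// (MemoryPressure, DiskPressure, PIDPressure) to be False on every node.
package waiters

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func verifyNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status != corev1.ConditionFalse {
					return fmt.Errorf("node %s reports %s=%s", n.Name, c.Type, c.Status)
				}
			}
		}
	}
	return nil
}
```
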
	I0318 11:20:27.908161    6988 start.go:240] waiting for startup goroutines ...
	I0318 11:20:27.908161    6988 start.go:254] writing updated cluster config ...
	I0318 11:20:27.911842    6988 out.go:177] 
	I0318 11:20:27.927754    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:20:27.927754    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:20:27.933717    6988 out.go:177] * Starting "ha-606900-m03" control-plane node in "ha-606900" cluster
	I0318 11:20:27.938223    6988 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:20:27.938223    6988 cache.go:56] Caching tarball of preloaded images
	I0318 11:20:27.938223    6988 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:20:27.938863    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:20:27.938863    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:20:27.943429    6988 start.go:360] acquireMachinesLock for ha-606900-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:20:27.944088    6988 start.go:364] duration metric: took 122.3µs to acquireMachinesLock for "ha-606900-m03"
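
The Spec dump in the acquireMachinesLock line above ({Name:... Delay:500ms Timeout:13m0s Cancel:<nil>}) matches the juju/mutex API for a cross-process named lock. A sketch assuming that package; the import paths and example name are assumptions, and minikube appears to use a hashed profile name as seen above:

```go
// Sketch only: a cross-process machine lock matching the Spec printed above.
package waiters

import (
	"time"

	"github.com/juju/clock"
	"github.com/juju/mutex/v2"
)

func lockMachine(name string) (mutex.Releaser, error) {
	return mutex.Acquire(mutex.Spec{
		Name:    name, // must start with a letter, e.g. the "mke1d3e0..." hash above
		Clock:   clock.WallClock,
		Delay:   500 * time.Millisecond, // retry interval while another process holds it
		Timeout: 13 * time.Minute,       // give up after this long, as in the log
	})
}
```

The caller would defer releaser.Release() once acquired, which is why the log can report a 122.3µs acquisition: no other process was holding the lock.
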
	I0318 11:20:27.944268    6988 start.go:93] Provisioning new machine with config: &{Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.148.106 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:20:27.944268    6988 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0318 11:20:27.948449    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 11:20:27.948502    6988 start.go:159] libmachine.API.Create for "ha-606900" (driver="hyperv")
	I0318 11:20:27.948502    6988 client.go:168] LocalClient.Create starting
	I0318 11:20:27.949096    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0318 11:20:27.949096    6988 main.go:141] libmachine: Decoding PEM data...
	I0318 11:20:27.949096    6988 main.go:141] libmachine: Parsing certificate...
	I0318 11:20:27.949783    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0318 11:20:27.949987    6988 main.go:141] libmachine: Decoding PEM data...
	I0318 11:20:27.950020    6988 main.go:141] libmachine: Parsing certificate...
	I0318 11:20:27.950162    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 11:20:29.923571    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 11:20:29.923571    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:29.923819    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 11:20:31.723463    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 11:20:31.723463    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:31.723463    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:20:33.274643    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:20:33.275646    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:33.276032    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:20:37.189312    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:20:37.189312    6988 main.go:141] libmachine: [stderr =====>] : 
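
The driver shells out to powershell.exe and parses the ConvertTo-Json output above to choose a switch. A sketch of that pattern from Go; struct and field names mirror the JSON in the log, everything else is illustrative:

```go
// Sketch only: run the Hyper-V cmdlet above and parse its JSON output.
package waiters

import (
	"encoding/json"
	"os/exec"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // Hyper-V enum: 0=Private, 1=Internal, 2=External
}

func listCandidateSwitches() ([]vmSwitch, error) {
	script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		return nil, err
	}
	var switches []vmSwitch
	// @() in the script guarantees a JSON array even for a single switch
	if err := json.Unmarshal(out, &switches); err != nil {
		return nil, err
	}
	return switches, nil
}
```
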
	I0318 11:20:37.192268    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 11:20:37.647554    6988 main.go:141] libmachine: Creating SSH key...
	I0318 11:20:38.036701    6988 main.go:141] libmachine: Creating VM...
	I0318 11:20:38.036701    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:20:41.062613    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:20:41.063650    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:41.063721    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0318 11:20:41.063721    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:20:42.952451    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:20:42.952451    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:42.952634    6988 main.go:141] libmachine: Creating VHD
	I0318 11:20:42.952722    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 11:20:46.899827    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3309E719-3820-4D3C-8654-837596809030
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 11:20:46.899827    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:46.899827    6988 main.go:141] libmachine: Writing magic tar header
	I0318 11:20:46.899827    6988 main.go:141] libmachine: Writing SSH key tar header
	I0318 11:20:46.909833    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 11:20:50.169729    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:20:50.169875    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:50.169875    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\disk.vhd' -SizeBytes 20000MB
	I0318 11:20:52.826935    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:20:52.826935    6988 main.go:141] libmachine: [stderr =====>] : 
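
The VHD dance above has a purpose: a fixed VHD is raw disk bytes plus a trailing footer, so a tar stream written at offset 0 is exactly what the guest later sees at the start of /dev/sda. "Writing magic tar header / Writing SSH key tar header" drops a marker plus the SSH public key there; boot2docker's init formats the disk when it finds the marker. Convert-VHD then switches to a space-efficient dynamic VHD (-DeleteSource removing the fixed file) and Resize-VHD grows it to the requested 20000MB. A sketch loosely modeled on docker-machine's disk-image helper; the magic string and file layout are assumptions from boot2docker conventions, not taken from this log:

```go
// Sketch only: write a tar stream at offset 0 of the fixed VHD so the guest
// finds the "please format me" marker and the SSH key at the start of its disk.
package waiters

import (
	"archive/tar"
	"os"
)

func writeMagicTar(vhdPath string, pubKey []byte) error {
	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0) // write in place at offset 0
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	magic := []byte("boot2docker, please format-me") // assumed marker string
	files := []struct {
		hdr  *tar.Header
		body []byte
	}{
		{&tar.Header{Name: "boot2docker, please format-me", Mode: 0644, Size: int64(len(magic))}, magic},
		{&tar.Header{Name: ".ssh", Typeflag: tar.TypeDir, Mode: 0700}, nil},
		{&tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}, pubKey},
	}
	for _, fl := range files {
		if err := tw.WriteHeader(fl.hdr); err != nil {
			return err
		}
		if _, err := tw.Write(fl.body); err != nil {
			return err
		}
	}
	return tw.Close()
}
```
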
	I0318 11:20:52.827050    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-606900-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 11:20:56.640924    6988 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-606900-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 11:20:56.641023    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:56.641129    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-606900-m03 -DynamicMemoryEnabled $false
	I0318 11:20:58.963906    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:20:58.963975    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:20:58.963975    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-606900-m03 -Count 2
	I0318 11:21:01.231007    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:01.231007    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:01.231007    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-606900-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\boot2docker.iso'
	I0318 11:21:03.922032    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:03.922915    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:03.923035    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-606900-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\disk.vhd'
	I0318 11:21:06.697476    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:06.697476    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:06.697476    6988 main.go:141] libmachine: Starting VM...
	I0318 11:21:06.698503    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-606900-m03
	I0318 11:21:09.923001    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:09.923001    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:09.923001    6988 main.go:141] libmachine: Waiting for host to start...
	I0318 11:21:09.923001    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:12.270508    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:12.270508    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:12.270508    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:14.894863    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:14.894904    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:15.900157    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:18.151958    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:18.151958    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:18.151958    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:20.775252    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:20.775252    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:21.780155    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:24.049999    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:24.049999    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:24.049999    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:26.702930    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:26.702930    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:27.705184    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:29.968246    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:29.968463    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:29.968544    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:32.575603    6988 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:21:32.575603    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:33.578210    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:35.897937    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:35.897937    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:35.898208    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:38.582343    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:21:38.582343    6988 main.go:141] libmachine: [stderr =====>] : 
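
The "Waiting for host to start..." loop above alternates a VM-state query with the ((Get-VM <name>).networkadapters[0]).ipaddresses[0] query until Hyper-V reports an address, several rounds coming back empty before 172.25.158.182 appears. A generic sketch of that retry loop; the timing parameters and function names are assumptions:

```go
// Sketch only: poll a supplied getIP func until it yields a non-empty address.
package waiters

import (
	"fmt"
	"time"
)

func waitForIP(getIP func() (string, error), timeout, interval time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		ip, err := getIP()
		if err == nil && ip != "" {
			return ip, nil // e.g. 172.25.158.182 after several empty polls
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("no IP within %v", timeout)
		}
		time.Sleep(interval) // the log shows ~1s pauses between query rounds
	}
}
```
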
	I0318 11:21:38.583009    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:40.821436    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:40.821436    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:40.821436    6988 machine.go:94] provisionDockerMachine start ...
	I0318 11:21:40.821436    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:43.083303    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:43.083303    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:43.083303    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:45.826412    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:21:45.826412    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:45.832315    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:21:45.844077    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:21:45.844077    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:21:45.979980    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 11:21:45.979980    6988 buildroot.go:166] provisioning hostname "ha-606900-m03"
	I0318 11:21:45.980328    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:48.203719    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:48.204715    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:48.204770    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:50.873609    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:21:50.873609    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:50.879404    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:21:50.880149    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:21:50.880149    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-606900-m03 && echo "ha-606900-m03" | sudo tee /etc/hostname
	I0318 11:21:51.036398    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-606900-m03
	
	I0318 11:21:51.036398    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:53.251539    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:53.251689    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:53.251767    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:21:55.896601    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:21:55.896601    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:55.906178    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:21:55.906178    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:21:55.906178    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-606900-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-606900-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-606900-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:21:56.046084    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 11:21:56.046084    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0318 11:21:56.046084    6988 buildroot.go:174] setting up certificates
	I0318 11:21:56.046084    6988 provision.go:84] configureAuth start
	I0318 11:21:56.046084    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:21:58.230359    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:21:58.230359    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:21:58.230457    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:00.886157    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:00.886231    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:00.886337    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:03.118631    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:03.118631    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:03.118631    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:05.789457    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:05.789457    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:05.789457    6988 provision.go:143] copyHostCerts
	I0318 11:22:05.790514    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0318 11:22:05.790748    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0318 11:22:05.790853    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0318 11:22:05.791137    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:22:05.792194    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0318 11:22:05.792415    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0318 11:22:05.792415    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0318 11:22:05.792841    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:22:05.793953    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0318 11:22:05.794226    6988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0318 11:22:05.794226    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0318 11:22:05.794679    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:22:05.795757    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-606900-m03 san=[127.0.0.1 172.25.158.182 ha-606900-m03 localhost minikube]
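	configureAuth issues a server certificate whose SAN list covers the node IP plus the usual local names, signed by the machine CA. A minimal sketch of the same step with Go's crypto/x509, assuming PEM-encoded CA files with a PKCS#1 RSA key named ca.pem/ca-key.pem (file names and validity period are illustrative, not minikube's layout):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the CA pair; assumes PEM files with a PKCS#1 RSA key.
		caPEM, err := os.ReadFile("ca.pem")
		if err != nil {
			panic(err)
		}
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		if err != nil {
			panic(err)
		}
		caBlock, _ := pem.Decode(caPEM)
		keyBlock, _ := pem.Decode(caKeyPEM)
		if caBlock == nil || keyBlock == nil {
			panic("could not decode CA PEM data")
		}
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			panic(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			panic(err)
		}
		// Fresh server key and a template carrying the SANs from the log line.
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-606900-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.25.158.182")},
			DNSNames:     []string{"ha-606900-m03", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}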
	I0318 11:22:06.001932    6988 provision.go:177] copyRemoteCerts
	I0318 11:22:06.014870    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:22:06.015006    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:08.184064    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:08.184252    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:08.184252    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:10.832057    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:10.832057    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:10.832255    6988 sshutil.go:53] new ssh client: &{IP:172.25.158.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\id_rsa Username:docker}
	I0318 11:22:10.951013    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9360071s)
	I0318 11:22:10.951047    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 11:22:10.951192    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:22:10.999541    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 11:22:10.999946    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 11:22:11.052041    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 11:22:11.052041    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 11:22:11.104327    6988 provision.go:87] duration metric: took 15.0580927s to configureAuth
	I0318 11:22:11.104538    6988 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:22:11.105309    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:22:11.105359    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:13.347821    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:13.347821    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:13.347896    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:16.030698    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:16.031465    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:16.039118    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:22:16.039524    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:22:16.039524    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:22:16.167931    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:22:16.167931    6988 buildroot.go:70] root file system type: tmpfs
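	The provisioner branches on the root filesystem type reported by `df --output=fstype / | tail -n 1` (tmpfs here, since the Buildroot guest runs from RAM). A small sketch of the same probe, assuming a Linux machine with GNU df:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// rootFSType mirrors the probe in the log: `df --output=fstype /` prints
	// a header line plus the value, so the last whitespace-separated field
	// is the filesystem type (the `tail -n 1` of the shell version).
	func rootFSType() (string, error) {
		out, err := exec.Command("df", "--output=fstype", "/").Output()
		if err != nil {
			return "", err
		}
		fields := strings.Fields(string(out))
		if len(fields) == 0 {
			return "", fmt.Errorf("unexpected df output: %q", out)
		}
		return fields[len(fields)-1], nil
	}

	func main() {
		t, err := rootFSType()
		if err != nil {
			panic(err)
		}
		fmt.Println(t) // "tmpfs" on the Buildroot guest above
	}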
	I0318 11:22:16.168258    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:22:16.168258    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:18.403909    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:18.404131    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:18.404131    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:21.013644    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:21.013644    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:21.021130    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:22:21.021862    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:22:21.021862    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.148.74"
	Environment="NO_PROXY=172.25.148.74,172.25.148.106"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:22:21.179788    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.148.74
	Environment=NO_PROXY=172.25.148.74,172.25.148.106
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:22:21.179847    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:23.397822    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:23.398018    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:23.398018    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:26.040210    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:26.040210    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:26.048683    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:22:26.049557    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:22:26.049557    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:22:28.287260    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
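	The unit swap above is deliberately idempotent: `diff -u old new` exits non-zero when the file is missing or differs, so the mv/daemon-reload/enable/restart branch runs only on change (here the unit did not exist yet, hence the "can't stat" message and the fresh symlink). A sketch of the same compare-and-swap done locally in Go (needs root; paths as in the log, error handling simplified):

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	func main() {
		const unit = "/lib/systemd/system/docker.service"
		newBody, err := os.ReadFile(unit + ".new")
		if err != nil {
			panic(err)
		}
		old, err := os.ReadFile(unit) // a missing file counts as "changed"
		if err == nil && bytes.Equal(old, newBody) {
			os.Remove(unit + ".new") // identical: drop the staged copy
			return
		}
		if err := os.Rename(unit+".new", unit); err != nil {
			panic(err)
		}
		// Same follow-up as the shell one-liner: reload, enable, restart.
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				panic(string(out))
			}
		}
	}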
	I0318 11:22:28.287260    6988 machine.go:97] duration metric: took 47.4655249s to provisionDockerMachine
	I0318 11:22:28.287260    6988 client.go:171] duration metric: took 2m0.3380004s to LocalClient.Create
	I0318 11:22:28.287260    6988 start.go:167] duration metric: took 2m0.3380004s to libmachine.API.Create "ha-606900"
	I0318 11:22:28.287260    6988 start.go:293] postStartSetup for "ha-606900-m03" (driver="hyperv")
	I0318 11:22:28.287260    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:22:28.300681    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:22:28.300681    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:30.492048    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:30.492048    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:30.492318    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:33.181857    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:33.182441    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:33.182964    6988 sshutil.go:53] new ssh client: &{IP:172.25.158.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\id_rsa Username:docker}
	I0318 11:22:33.287479    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9867662s)
	I0318 11:22:33.299948    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:22:33.307289    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:22:33.307289    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0318 11:22:33.307732    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0318 11:22:33.308752    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> 91202.pem in /etc/ssl/certs
	I0318 11:22:33.308811    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /etc/ssl/certs/91202.pem
	I0318 11:22:33.321318    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 11:22:33.339157    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /etc/ssl/certs/91202.pem (1708 bytes)
	I0318 11:22:33.391038    6988 start.go:296] duration metric: took 5.1037457s for postStartSetup
	I0318 11:22:33.393884    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:35.618187    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:35.618187    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:35.618469    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:38.308584    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:38.308767    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:38.308820    6988 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\config.json ...
	I0318 11:22:38.311584    6988 start.go:128] duration metric: took 2m10.3664941s to createHost
	I0318 11:22:38.311584    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:40.531065    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:40.531065    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:40.531065    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:43.197980    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:43.197980    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:43.204318    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:22:43.205043    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:22:43.205043    6988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 11:22:43.336205    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710760963.328851647
	
	I0318 11:22:43.336328    6988 fix.go:216] guest clock: 1710760963.328851647
	I0318 11:22:43.336328    6988 fix.go:229] Guest: 2024-03-18 11:22:43.328851647 +0000 UTC Remote: 2024-03-18 11:22:38.3115843 +0000 UTC m=+602.662131001 (delta=5.017267347s)
	I0318 11:22:43.336457    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:45.587494    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:45.587494    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:45.588017    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:48.347657    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:48.347657    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:48.353632    6988 main.go:141] libmachine: Using SSH client type: native
	I0318 11:22:48.354500    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.158.182 22 <nil> <nil>}
	I0318 11:22:48.354500    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710760963
	I0318 11:22:48.504095    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:22:43 UTC 2024
	
	I0318 11:22:48.504095    6988 fix.go:236] clock set: Mon Mar 18 11:22:43 UTC 2024
	 (err=<nil>)
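	fix.go compares the guest clock against the local timestamp and, on a large delta (5.017s here), resets it with `sudo date -s @<unix>`. A sketch of that decision; the 2-second threshold and which side is treated as the reference are our assumptions, not minikube's documented behavior:

	package main

	import (
		"fmt"
		"time"
	)

	// syncCommand returns the `date -s` invocation that resets a drifting
	// clock to the reference time, but only when the drift exceeds max.
	func syncCommand(drifting, reference time.Time, max time.Duration) (string, bool) {
		delta := drifting.Sub(reference)
		if delta < 0 {
			delta = -delta
		}
		if delta <= max {
			return "", false
		}
		return fmt.Sprintf("sudo date -s @%d", reference.Unix()), true
	}

	func main() {
		guest := time.Unix(1710760963, 328851647)             // guest clock from the log
		host := guest.Add(-5017267347 * time.Nanosecond)      // 5.017267347s delta from the log
		if cmd, ok := syncCommand(host, guest, 2*time.Second); ok {
			fmt.Println(cmd) // prints: sudo date -s @1710760963
		}
	}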
	I0318 11:22:48.504095    6988 start.go:83] releasing machines lock for "ha-606900-m03", held for 2m20.5590698s
	I0318 11:22:48.504095    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:50.718328    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:50.718973    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:50.718973    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:53.368257    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:53.368257    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:53.381342    6988 out.go:177] * Found network options:
	I0318 11:22:53.396915    6988 out.go:177]   - NO_PROXY=172.25.148.74,172.25.148.106
	W0318 11:22:53.400156    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:22:53.400156    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:22:53.404740    6988 out.go:177]   - NO_PROXY=172.25.148.74,172.25.148.106
	W0318 11:22:53.407205    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:22:53.407205    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:22:53.408950    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:22:53.408950    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:22:53.410951    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:22:53.410951    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:53.422278    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 11:22:53.422278    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900-m03 ).state
	I0318 11:22:55.715110    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:55.715110    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:55.715110    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:22:55.715790    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:55.715967    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:55.715967    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:22:58.447605    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:58.447679    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:58.448510    6988 sshutil.go:53] new ssh client: &{IP:172.25.158.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\id_rsa Username:docker}
	I0318 11:22:58.476125    6988 main.go:141] libmachine: [stdout =====>] : 172.25.158.182
	
	I0318 11:22:58.476125    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:22:58.476861    6988 sshutil.go:53] new ssh client: &{IP:172.25.158.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900-m03\id_rsa Username:docker}
	I0318 11:22:58.542547    6988 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1202368s)
	W0318 11:22:58.542668    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:22:58.555611    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:22:58.669917    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
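	Bridge and podman CNI configs are parked by renaming them with a `.mk_disabled` suffix so only the intended CNI stays active, while the renamed files remain recoverable. A sketch of the same depth-1 sweep in Go (the find invocation above is the authoritative form):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNI renames bridge/podman configs in dir by appending
	// ".mk_disabled", mirroring find -maxdepth 1 ... -exec mv {} {}.mk_disabled.
	func disableBridgeCNI(dir string) error {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return err
				}
				fmt.Println("disabled", src)
			}
		}
		return nil
	}

	func main() {
		if err := disableBridgeCNI("/etc/cni/net.d"); err != nil {
			panic(err)
		}
	}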
	I0318 11:22:58.669917    6988 start.go:494] detecting cgroup driver to use...
	I0318 11:22:58.669917    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2589336s)
	I0318 11:22:58.669917    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:22:58.721676    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:22:58.756085    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:22:58.776139    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:22:58.787613    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:22:58.822260    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:22:58.855740    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:22:58.889132    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:22:58.922883    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:22:58.957350    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 11:22:58.991608    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:22:59.023351    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:22:59.053660    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:22:59.257483    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 11:22:59.292521    6988 start.go:494] detecting cgroup driver to use...
	I0318 11:22:59.305502    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:22:59.344972    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:22:59.380878    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:22:59.426694    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:22:59.467523    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:22:59.507809    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 11:22:59.571519    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:22:59.601102    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:22:59.660964    6988 ssh_runner.go:195] Run: which cri-dockerd
	I0318 11:22:59.682083    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:22:59.701838    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:22:59.749642    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:22:59.959809    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:23:00.181846    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:23:00.181846    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
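	The 130-byte /etc/docker/daemon.json pins Docker's cgroup driver to cgroupfs before the restart that follows. A sketch of building such a file; the log does not show the payload, so every field beyond exec-opts is an assumption about the usual shape of this file:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// exec-opts is the setting that selects the cgroup driver; the
		// remaining keys are common companions, not confirmed by the log.
		cfg := map[string]any{
			"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
			"log-driver":     "json-file",
			"log-opts":       map[string]string{"max-size": "100m"},
			"storage-driver": "overlay2",
		}
		b, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(b)) // write to /etc/docker/daemon.json, then restart docker
	}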
	I0318 11:23:00.231607    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:23:00.456248    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:23:03.009316    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5530514s)
	I0318 11:23:03.022463    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:23:03.063324    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:23:03.105684    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:23:03.344347    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:23:03.562510    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:23:03.772708    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:23:03.820045    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:23:03.856020    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:23:04.069467    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:23:04.182573    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:23:04.197024    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 11:23:04.205719    6988 start.go:562] Will wait 60s for crictl version
	I0318 11:23:04.218716    6988 ssh_runner.go:195] Run: which crictl
	I0318 11:23:04.238554    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:23:04.318304    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 11:23:04.328277    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:23:04.373732    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:23:04.412844    6988 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:23:04.416453    6988 out.go:177]   - env NO_PROXY=172.25.148.74
	I0318 11:23:04.418480    6988 out.go:177]   - env NO_PROXY=172.25.148.74,172.25.148.106
	I0318 11:23:04.421170    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:23:04.427051    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:23:04.427248    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:23:04.427248    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:23:04.427248    6988 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ae:0d:2c Flags:up|broadcast|multicast|running}
	I0318 11:23:04.430000    6988 ip.go:210] interface addr: fe80::f8a6:d6b6:cc4:1ba0/64
	I0318 11:23:04.430000    6988 ip.go:210] interface addr: 172.25.144.1/20
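	ip.go walks the host's adapters, skips those that do not match the Hyper-V switch's name prefix, and takes the matching interface's IPv4 address for host.minikube.internal. A sketch of that selection with net.Interfaces (the interface name is machine-specific):

	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	func main() {
		const prefix = "vEthernet (Default Switch)"
		ifaces, err := net.Interfaces()
		if err != nil {
			panic(err)
		}
		for _, ifc := range ifaces {
			if !strings.HasPrefix(ifc.Name, prefix) {
				fmt.Printf("%q does not match prefix %q\n", ifc.Name, prefix)
				continue
			}
			addrs, err := ifc.Addrs()
			if err != nil {
				panic(err)
			}
			for _, a := range addrs {
				// Prefer the IPv4 address, as the log does (172.25.144.1/20
				// is chosen over the fe80:: link-local address).
				if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
					fmt.Println("host-reachable address:", ipnet.IP)
					return
				}
			}
		}
	}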
	I0318 11:23:04.441575    6988 ssh_runner.go:195] Run: grep 172.25.144.1	host.minikube.internal$ /etc/hosts
	I0318 11:23:04.449841    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 11:23:04.472895    6988 mustload.go:65] Loading cluster: ha-606900
	I0318 11:23:04.473797    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:23:04.473850    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:23:06.705787    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:23:06.705787    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:23:06.706169    6988 host.go:66] Checking if "ha-606900" exists ...
	I0318 11:23:06.706982    6988 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900 for IP: 172.25.158.182
	I0318 11:23:06.707036    6988 certs.go:194] generating shared ca certs ...
	I0318 11:23:06.707036    6988 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:23:06.707647    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0318 11:23:06.708029    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:23:06.708209    6988 certs.go:256] generating profile certs ...
	I0318 11:23:06.708836    6988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\client.key
	I0318 11:23:06.708942    6988 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.f0fc3bf2
	I0318 11:23:06.709339    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.f0fc3bf2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.148.74 172.25.148.106 172.25.158.182 172.25.159.254]
	I0318 11:23:07.067204    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.f0fc3bf2 ...
	I0318 11:23:07.067204    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.f0fc3bf2: {Name:mke127e03e18b4156cbb4926f3348eeff6a27201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:23:07.068565    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.f0fc3bf2 ...
	I0318 11:23:07.068565    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.f0fc3bf2: {Name:mk15eea5ec06174a1c0fceb7d6b416abc057f9eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:23:07.069335    6988 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt.f0fc3bf2 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt
	I0318 11:23:07.081976    6988 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key.f0fc3bf2 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key
	I0318 11:23:07.083863    6988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key
	I0318 11:23:07.083863    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 11:23:07.084086    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 11:23:07.084246    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 11:23:07.084304    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 11:23:07.084544    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 11:23:07.084683    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 11:23:07.084854    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 11:23:07.085111    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 11:23:07.085504    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem (1338 bytes)
	W0318 11:23:07.085504    6988 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120_empty.pem, impossibly tiny 0 bytes
	I0318 11:23:07.085504    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0318 11:23:07.086266    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:23:07.086540    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:23:07.086808    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:23:07.087184    6988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem (1708 bytes)
	I0318 11:23:07.087645    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem -> /usr/share/ca-certificates/9120.pem
	I0318 11:23:07.087869    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /usr/share/ca-certificates/91202.pem
	I0318 11:23:07.087990    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:23:07.088251    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:23:09.300730    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:23:09.300730    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:23:09.301496    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:23:11.958320    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:23:11.958826    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:23:11.959476    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
	I0318 11:23:12.065123    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0318 11:23:12.073902    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 11:23:12.108498    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0318 11:23:12.117038    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0318 11:23:12.152206    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 11:23:12.159997    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 11:23:12.194189    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0318 11:23:12.200998    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0318 11:23:12.239228    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0318 11:23:12.249008    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 11:23:12.284817    6988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0318 11:23:12.292056    6988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0318 11:23:12.313529    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:23:12.363711    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 11:23:12.411542    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:23:12.467415    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:23:12.517956    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0318 11:23:12.569831    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 11:23:12.622903    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:23:12.673723    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-606900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 11:23:12.724133    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem --> /usr/share/ca-certificates/9120.pem (1338 bytes)
	I0318 11:23:12.777685    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /usr/share/ca-certificates/91202.pem (1708 bytes)
	I0318 11:23:12.827364    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:23:12.878421    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 11:23:12.919672    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0318 11:23:12.955265    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 11:23:12.990224    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0318 11:23:13.028086    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 11:23:13.063599    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0318 11:23:13.099173    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 11:23:13.147686    6988 ssh_runner.go:195] Run: openssl version
	I0318 11:23:13.171300    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9120.pem && ln -fs /usr/share/ca-certificates/9120.pem /etc/ssl/certs/9120.pem"
	I0318 11:23:13.209733    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9120.pem
	I0318 11:23:13.218169    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 11:23:13.231916    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9120.pem
	I0318 11:23:13.262934    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9120.pem /etc/ssl/certs/51391683.0"
	I0318 11:23:13.296979    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91202.pem && ln -fs /usr/share/ca-certificates/91202.pem /etc/ssl/certs/91202.pem"
	I0318 11:23:13.335126    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91202.pem
	I0318 11:23:13.342728    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 11:23:13.357209    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91202.pem
	I0318 11:23:13.379151    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91202.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 11:23:13.412909    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:23:13.450314    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:23:13.458373    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:23:13.473287    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:23:13.495188    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
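	The `openssl x509 -hash` / `ln -fs <cert> <hash>.0` pairs above register each CA in OpenSSL's hashed-directory lookup: TLS clients resolve an issuer by opening `<subject-hash>.N` in /etc/ssl/certs. A sketch of the same registration, shelling out to openssl as the log does (assumes openssl on PATH and write access to the certs directory):

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert computes the certificate's subject hash (what
	// `openssl x509 -hash -noout` prints) and symlinks <hash>.0 to it.
	func linkCert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			panic(err)
		}
	}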
	I0318 11:23:13.532701    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:23:13.540759    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 11:23:13.540985    6988 kubeadm.go:928] updating node {m03 172.25.158.182 8443 v1.28.4 docker true true} ...
	I0318 11:23:13.541417    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-606900-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.158.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 11:23:13.541497    6988 kube-vip.go:111] generating kube-vip config ...
	I0318 11:23:13.552707    6988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 11:23:13.584416    6988 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 11:23:13.584573    6988 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 11:23:13.598170    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:23:13.614842    6988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 11:23:13.628544    6988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 11:23:13.649118    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0318 11:23:13.649118    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 11:23:13.649329    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:23:13.649329    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0318 11:23:13.649329    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:23:13.664982    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:23:13.664982    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:23:13.666223    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:23:13.678567    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 11:23:13.678742    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 11:23:13.711300    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 11:23:13.711381    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:23:13.711656    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 11:23:13.725685    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:23:13.810979    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 11:23:13.811109    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
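	Missing kubelet/kubeadm/kubectl binaries are fetched from dl.k8s.io and validated against the sibling .sha256 file (the `checksum=file:` scheme above). A sketch of such a checksum-verified download in Go:

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetch downloads url into memory, failing on any non-200 response.
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		url := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet"
		body, err := fetch(url)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(url + ".sha256")
		if err != nil {
			panic(err)
		}
		got := sha256.Sum256(body)
		want := strings.Fields(string(sum))[0] // the file may carry "<hash>  <name>"
		if hex.EncodeToString(got[:]) != want {
			panic("checksum mismatch for " + url)
		}
		if err := os.WriteFile("kubelet", body, 0o755); err != nil {
			panic(err)
		}
		fmt.Println("verified", len(body), "bytes")
	}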
	I0318 11:23:15.127301    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 11:23:15.149209    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0318 11:23:15.185145    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:23:15.226796    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 11:23:15.288351    6988 ssh_runner.go:195] Run: grep 172.25.159.254	control-plane.minikube.internal$ /etc/hosts
	I0318 11:23:15.296743    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 11:23:15.338387    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:23:15.577185    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:23:15.615011    6988 host.go:66] Checking if "ha-606900" exists ...
	I0318 11:23:15.615790    6988 start.go:316] joinCluster: &{Name:ha-606900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-606900 Namespace:default APIServerHAVIP:172.25.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.148.106 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.25.158.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:23:15.615963    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 11:23:15.616066    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-606900 ).state
	I0318 11:23:17.860502    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:23:17.860713    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:23:17.860713    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-606900 ).networkadapters[0]).ipaddresses[0]
	I0318 11:23:20.521644    6988 main.go:141] libmachine: [stdout =====>] : 172.25.148.74
	
	I0318 11:23:20.522396    6988 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:23:20.523061    6988 sshutil.go:53] new ssh client: &{IP:172.25.148.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-606900\id_rsa Username:docker}
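The Hyper-V driver obtains the VM's address by shelling out to PowerShell exactly as logged above: first network adapter, first IP address. A Go sketch of that query:

    package sketch

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hypervVMIP runs the same PowerShell expression seen in the log and
    // returns the VM's first reported IP address.
    func hypervVMIP(vm string) (string, error) {
        script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }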
	I0318 11:23:20.765196    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1491515s)
	I0318 11:23:20.765196    6988 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.25.158.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:23:20.766130    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8p5tel.vtprgb1klrtlzuvh --discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-606900-m03 --control-plane --apiserver-advertise-address=172.25.158.182 --apiserver-bind-port=8443"
	I0318 11:24:08.534489    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8p5tel.vtprgb1klrtlzuvh --discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-606900-m03 --control-plane --apiserver-advertise-address=172.25.158.182 --apiserver-bind-port=8443": (47.7680595s)
	I0318 11:24:08.534489    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 11:24:09.436469    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-606900-m03 minikube.k8s.io/updated_at=2024_03_18T11_24_09_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd minikube.k8s.io/name=ha-606900 minikube.k8s.io/primary=false
	I0318 11:24:09.613944    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-606900-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 11:24:10.072204    6988 start.go:318] duration metric: took 54.4561284s to joinCluster
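joinCluster is the two-step flow visible above: mint a fresh join command on an existing control-plane node (kubeadm token create --print-join-command), then run it on the new member with the control-plane flags appended. Sketched in Go, with the run callbacks standing in (hypothetically) for minikube's per-node SSH runners:

    package sketch

    import "strings"

    // joinControlPlane mirrors the flow in the log: runPrimary executes on
    // an existing control-plane node, runNew on the node being added.
    func joinControlPlane(runPrimary, runNew func(cmd string) (string, error), advertiseIP string) error {
        joinCmd, err := runPrimary("kubeadm token create --print-join-command --ttl=0")
        if err != nil {
            return err
        }
        full := strings.TrimSpace(joinCmd) +
            " --control-plane" +
            " --apiserver-advertise-address=" + advertiseIP +
            " --apiserver-bind-port=8443"
        _, err = runNew("sudo " + full)
        return err
    }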
	I0318 11:24:10.072359    6988 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.25.158.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:24:10.076914    6988 out.go:177] * Verifying Kubernetes components...
	I0318 11:24:10.073395    6988 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:24:10.096188    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:24:10.528110    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:24:10.577111    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 11:24:10.578254    6988 kapi.go:59] client config for ha-606900: &rest.Config{Host:"https://172.25.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-606900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-606900\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
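Note the QPS:0, Burst:0 in the rest.Config above: with zero values, client-go falls back to its defaults (5 requests/s, burst 10), which is what produces the recurring "Waited ... due to client-side throttling" lines in the polling further down. If those waits mattered, the limits could be raised when building the client; a sketch with illustrative values, not what minikube does:

    package sketch

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClient builds a clientset with a higher client-side rate limit.
    func newClient(kubeconfig string) *kubernetes.Clientset {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatal(err)
        }
        cfg.QPS = 50    // client-go defaults to 5 when left at zero
        cfg.Burst = 100 // client-go defaults to 10 when left at zero
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        return cs
    }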
	W0318 11:24:10.578254    6988 kubeadm.go:477] Overriding stale ClientConfig host https://172.25.159.254:8443 with https://172.25.148.74:8443
	I0318 11:24:10.579136    6988 node_ready.go:35] waiting up to 6m0s for node "ha-606900-m03" to be "Ready" ...
	I0318 11:24:10.579136    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:10.579136    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:10.579136    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:10.579136    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:10.605538    6988 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0318 11:24:11.088130    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:11.088130    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:11.088386    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:11.088386    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:11.093677    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:11.579713    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:11.579785    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:11.579785    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:11.579785    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:11.586517    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:12.086411    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:12.086411    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:12.086411    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:12.086411    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:12.097243    6988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:24:12.595095    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:12.595095    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:12.595095    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:12.595095    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:12.601867    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:12.602213    6988 node_ready.go:53] node "ha-606900-m03" has status "Ready":"False"
	I0318 11:24:13.086880    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:13.086965    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:13.086965    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:13.086965    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:13.094809    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:24:13.591099    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:13.591099    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:13.591099    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:13.591099    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:13.595668    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:14.082011    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:14.082011    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:14.082531    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:14.082531    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:14.086919    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:14.592474    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:14.592474    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:14.592759    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:14.592791    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:14.597584    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:15.084685    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:15.085038    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:15.085141    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:15.085141    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:15.092416    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:24:15.093222    6988 node_ready.go:53] node "ha-606900-m03" has status "Ready":"False"
	I0318 11:24:15.590205    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:15.590276    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:15.590276    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:15.590276    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:15.595893    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:16.080119    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:16.080119    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:16.080119    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:16.080119    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:16.166988    6988 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I0318 11:24:16.586550    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:16.586550    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:16.586550    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:16.586550    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:16.591759    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:17.094918    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:17.094918    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:17.094918    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:17.094918    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:17.101139    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:17.102133    6988 node_ready.go:53] node "ha-606900-m03" has status "Ready":"False"
	I0318 11:24:17.587783    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:17.588005    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:17.588005    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:17.588005    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:17.593675    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:18.081249    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:18.081508    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:18.081508    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:18.081508    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:18.086859    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:18.591204    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:18.591385    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:18.591385    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:18.591385    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:18.597053    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:19.085534    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:19.085534    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.085534    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.085534    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.091223    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:19.579564    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:19.579809    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.579809    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.579809    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.586021    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:19.587117    6988 node_ready.go:49] node "ha-606900-m03" has status "Ready":"True"
	I0318 11:24:19.587117    6988 node_ready.go:38] duration metric: took 9.0079248s for node "ha-606900-m03" to be "Ready" ...
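The ~500 ms GET loop above is the whole of the node-readiness wait: fetch the Node object, inspect its Ready condition, stop when it reports True. Roughly equivalent client-go code (a sketch, not minikube's node_ready.go):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node about twice a second until its Ready
    // condition is True or the timeout expires.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }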
	I0318 11:24:19.587183    6988 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:24:19.587318    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:24:19.587318    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.587318    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.587318    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.597949    6988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:24:19.611894    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jsf9x" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.611894    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jsf9x
	I0318 11:24:19.611894    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.611894    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.611894    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.617929    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:19.618544    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:19.618544    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.618544    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.618544    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.623366    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:19.623656    6988 pod_ready.go:92] pod "coredns-5dd5756b68-jsf9x" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:19.623656    6988 pod_ready.go:81] duration metric: took 11.7622ms for pod "coredns-5dd5756b68-jsf9x" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.623656    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wvh6v" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.624241    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wvh6v
	I0318 11:24:19.624241    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.624322    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.624322    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.628286    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:24:19.629687    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:19.629687    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.629773    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.629773    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.634072    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:19.634893    6988 pod_ready.go:92] pod "coredns-5dd5756b68-wvh6v" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:19.634893    6988 pod_ready.go:81] duration metric: took 11.2365ms for pod "coredns-5dd5756b68-wvh6v" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.634951    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.634997    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900
	I0318 11:24:19.635096    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.635096    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.635138    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.639912    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:19.640575    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:19.640575    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.640575    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.640575    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.645397    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:19.646046    6988 pod_ready.go:92] pod "etcd-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:19.646046    6988 pod_ready.go:81] duration metric: took 11.0957ms for pod "etcd-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.646046    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.646163    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m02
	I0318 11:24:19.646260    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.646260    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.646260    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.650852    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:19.651674    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:19.651674    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.651674    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.651674    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.657482    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:19.658884    6988 pod_ready.go:92] pod "etcd-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:19.658884    6988 pod_ready.go:81] duration metric: took 12.8377ms for pod "etcd-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.658884    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:19.783310    6988 request.go:629] Waited for 124.3174ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m03
	I0318 11:24:19.783310    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m03
	I0318 11:24:19.783310    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.783310    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.783310    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.791132    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:19.984878    6988 request.go:629] Waited for 192.3614ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:19.985094    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:19.985094    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:19.985094    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:19.985094    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:19.990916    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:20.185620    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m03
	I0318 11:24:20.185705    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:20.185705    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:20.185705    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:20.191252    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:20.387134    6988 request.go:629] Waited for 194.7598ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:20.387339    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:20.387339    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:20.387456    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:20.387456    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:20.392676    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:20.666982    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m03
	I0318 11:24:20.666982    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:20.666982    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:20.666982    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:20.674738    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:24:20.790527    6988 request.go:629] Waited for 113.9058ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:20.790583    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:20.790583    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:20.790583    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:20.790583    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:20.798128    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:24:21.166627    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m03
	I0318 11:24:21.166716    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:21.166716    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:21.166716    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:21.174258    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:24:21.181642    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:21.181642    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:21.181642    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:21.181642    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:21.186257    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:21.667385    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-606900-m03
	I0318 11:24:21.667385    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:21.667385    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:21.667385    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:21.672860    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:21.673871    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:21.673871    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:21.673871    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:21.673871    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:21.678140    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:21.679398    6988 pod_ready.go:92] pod "etcd-ha-606900-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:21.679986    6988 pod_ready.go:81] duration metric: took 2.0210885s for pod "etcd-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:21.679986    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:21.793593    6988 request.go:629] Waited for 113.607ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900
	I0318 11:24:21.793593    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900
	I0318 11:24:21.793593    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:21.793593    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:21.793931    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:21.798280    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:21.982397    6988 request.go:629] Waited for 182.6611ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:21.982837    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:21.982902    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:21.982929    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:21.982929    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:21.988501    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:21.989244    6988 pod_ready.go:92] pod "kube-apiserver-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:21.989302    6988 pod_ready.go:81] duration metric: took 309.2561ms for pod "kube-apiserver-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:21.989302    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:22.188565    6988 request.go:629] Waited for 199.1114ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900-m02
	I0318 11:24:22.188565    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900-m02
	I0318 11:24:22.188565    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:22.188565    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:22.188565    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:22.195195    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:22.393094    6988 request.go:629] Waited for 196.9142ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:22.393441    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:22.393441    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:22.393441    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:22.393441    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:22.399317    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:22.399922    6988 pod_ready.go:92] pod "kube-apiserver-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:22.400716    6988 pod_ready.go:81] duration metric: took 411.3576ms for pod "kube-apiserver-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:22.400716    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:22.594940    6988 request.go:629] Waited for 193.605ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900-m03
	I0318 11:24:22.595028    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-606900-m03
	I0318 11:24:22.595028    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:22.595028    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:22.595028    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:22.599774    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:22.781815    6988 request.go:629] Waited for 180.1839ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:22.781886    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:22.781958    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:22.781958    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:22.781958    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:22.786732    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:22.788513    6988 pod_ready.go:92] pod "kube-apiserver-ha-606900-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:22.788513    6988 pod_ready.go:81] duration metric: took 387.7946ms for pod "kube-apiserver-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:22.788571    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:22.988567    6988 request.go:629] Waited for 199.7284ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900
	I0318 11:24:22.988749    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900
	I0318 11:24:22.988749    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:22.988988    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:22.988988    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:22.998357    6988 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 11:24:23.195084    6988 request.go:629] Waited for 194.5145ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:23.195367    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:23.195367    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:23.195367    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:23.195367    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:23.200714    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:23.202006    6988 pod_ready.go:92] pod "kube-controller-manager-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:23.202159    6988 pod_ready.go:81] duration metric: took 413.5845ms for pod "kube-controller-manager-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:23.202159    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:23.380908    6988 request.go:629] Waited for 178.6364ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900-m02
	I0318 11:24:23.381059    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900-m02
	I0318 11:24:23.381059    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:23.381059    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:23.381059    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:23.386669    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:23.584065    6988 request.go:629] Waited for 196.3969ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:23.584192    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:23.584192    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:23.584192    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:23.584389    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:23.591870    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:24:23.592606    6988 pod_ready.go:92] pod "kube-controller-manager-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:23.592606    6988 pod_ready.go:81] duration metric: took 390.4447ms for pod "kube-controller-manager-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:23.592606    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:23.788994    6988 request.go:629] Waited for 195.6736ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900-m03
	I0318 11:24:23.788994    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-606900-m03
	I0318 11:24:23.788994    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:23.788994    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:23.788994    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:23.793601    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:23.991514    6988 request.go:629] Waited for 195.752ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:23.992235    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:23.992235    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:23.992235    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:23.992433    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:23.996921    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:23.998283    6988 pod_ready.go:92] pod "kube-controller-manager-ha-606900-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:23.998283    6988 pod_ready.go:81] duration metric: took 405.6746ms for pod "kube-controller-manager-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:23.998283    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cjhcj" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:24.193474    6988 request.go:629] Waited for 195.1895ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cjhcj
	I0318 11:24:24.193906    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cjhcj
	I0318 11:24:24.193906    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:24.193906    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:24.193906    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:24.199452    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:24.379874    6988 request.go:629] Waited for 179.2815ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:24.379874    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:24.379874    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:24.379874    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:24.379874    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:24.384821    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:24:24.386279    6988 pod_ready.go:92] pod "kube-proxy-cjhcj" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:24.386354    6988 pod_ready.go:81] duration metric: took 388.069ms for pod "kube-proxy-cjhcj" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:24.386354    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fk4wg" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:24.584269    6988 request.go:629] Waited for 197.7966ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fk4wg
	I0318 11:24:24.584269    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fk4wg
	I0318 11:24:24.584269    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:24.584269    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:24.584269    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:24.590307    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:24.788999    6988 request.go:629] Waited for 197.731ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:24.789115    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:24.789115    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:24.789115    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:24.789115    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:24.794549    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:24.795391    6988 pod_ready.go:92] pod "kube-proxy-fk4wg" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:24.795391    6988 pod_ready.go:81] duration metric: took 409.0338ms for pod "kube-proxy-fk4wg" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:24.795450    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s9lzf" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:24.993157    6988 request.go:629] Waited for 197.4438ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9lzf
	I0318 11:24:24.993157    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9lzf
	I0318 11:24:24.993157    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:24.993157    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:24.993157    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:24.998337    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:25.195530    6988 request.go:629] Waited for 195.8348ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:25.195530    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:25.195758    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:25.195758    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:25.195758    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:25.201477    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:25.202484    6988 pod_ready.go:92] pod "kube-proxy-s9lzf" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:25.202484    6988 pod_ready.go:81] duration metric: took 407.0312ms for pod "kube-proxy-s9lzf" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:25.202484    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:25.383033    6988 request.go:629] Waited for 180.3819ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900
	I0318 11:24:25.383223    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900
	I0318 11:24:25.383327    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:25.383327    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:25.383327    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:25.416217    6988 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0318 11:24:25.589803    6988 request.go:629] Waited for 172.5306ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:25.589994    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900
	I0318 11:24:25.589994    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:25.590108    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:25.590108    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:25.595196    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:25.596283    6988 pod_ready.go:92] pod "kube-scheduler-ha-606900" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:25.596350    6988 pod_ready.go:81] duration metric: took 393.8644ms for pod "kube-scheduler-ha-606900" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:25.596350    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:25.793698    6988 request.go:629] Waited for 196.8083ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900-m02
	I0318 11:24:25.793698    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900-m02
	I0318 11:24:25.793698    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:25.793698    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:25.793698    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:25.802132    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:24:25.983603    6988 request.go:629] Waited for 180.2108ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:25.984089    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m02
	I0318 11:24:25.984089    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:25.984089    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:25.984089    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:25.989660    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:25.989883    6988 pod_ready.go:92] pod "kube-scheduler-ha-606900-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:25.990453    6988 pod_ready.go:81] duration metric: took 394.1ms for pod "kube-scheduler-ha-606900-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:25.990453    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:26.187246    6988 request.go:629] Waited for 196.6486ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900-m03
	I0318 11:24:26.187541    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-606900-m03
	I0318 11:24:26.187541    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:26.187541    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:26.187541    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:26.193050    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:26.394022    6988 request.go:629] Waited for 199.003ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:26.394022    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes/ha-606900-m03
	I0318 11:24:26.394022    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:26.394022    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:26.394022    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:26.399948    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:24:26.400636    6988 pod_ready.go:92] pod "kube-scheduler-ha-606900-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:24:26.400636    6988 pod_ready.go:81] duration metric: took 410.1806ms for pod "kube-scheduler-ha-606900-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:24:26.400636    6988 pod_ready.go:38] duration metric: took 6.8134106s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
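Each pod wait above follows the same shape: GET the pod, read its Ready condition, then GET its node to confirm the hosting node is still Ready. The condition check itself reduces to a few lines of client-go (a sketch of that shape; minikube's pod_ready.go also re-verifies the node, which is omitted here):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // allReady checks every kube-system pod matching one label selector,
    // e.g. "component=etcd" or "k8s-app=kube-proxy" from the list above.
    func allReady(cs *kubernetes.Clientset, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        for i := range pods.Items {
            if !podReady(&pods.Items[i]) {
                return false, nil
            }
        }
        return len(pods.Items) > 0, nil
    }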
	I0318 11:24:26.400636    6988 api_server.go:52] waiting for apiserver process to appear ...
	I0318 11:24:26.413367    6988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:24:26.443572    6988 api_server.go:72] duration metric: took 16.3709846s to wait for apiserver process to appear ...
	I0318 11:24:26.443717    6988 api_server.go:88] waiting for apiserver healthz status ...
	I0318 11:24:26.443776    6988 api_server.go:253] Checking apiserver healthz at https://172.25.148.74:8443/healthz ...
	I0318 11:24:26.457758    6988 api_server.go:279] https://172.25.148.74:8443/healthz returned 200:
	ok
	I0318 11:24:26.457966    6988 round_trippers.go:463] GET https://172.25.148.74:8443/version
	I0318 11:24:26.458008    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:26.458008    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:26.458008    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:26.460094    6988 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:24:26.460172    6988 api_server.go:141] control plane version: v1.28.4
	I0318 11:24:26.460252    6988 api_server.go:131] duration metric: took 16.5353ms to wait for apiserver health ...
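The health probe above is nothing more than a GET against the apiserver's /healthz endpoint expecting the literal body "ok". A self-contained sketch (the real check authenticates with the cluster CA from the kubeconfig; verification is skipped here only to keep the example short):

    package sketch

    import (
        "crypto/tls"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // apiserverHealthy returns true when GET <endpoint>/healthz answers
    // 200 with body "ok".
    func apiserverHealthy(endpoint string) bool {
        c := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: the real client trusts the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := c.Get(endpoint + "/healthz")
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok"
    }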
	I0318 11:24:26.460361    6988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 11:24:26.580196    6988 request.go:629] Waited for 119.6784ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:24:26.580196    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:24:26.580196    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:26.580676    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:26.580676    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:26.590728    6988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:24:26.602452    6988 system_pods.go:59] 24 kube-system pods found
	I0318 11:24:26.602614    6988 system_pods.go:61] "coredns-5dd5756b68-jsf9x" [05681724-a32a-40c0-9f26-1c1eb9dffb65] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "coredns-5dd5756b68-wvh6v" [843ee0ec-fcfd-4763-8c92-acfe93bec900] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "etcd-ha-606900" [ed704c6d-aba3-496c-9988-c9f86218f1b4] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "etcd-ha-606900-m02" [a453b1e7-143c-4ea7-a1f4-f6dc6f8aa0b8] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "etcd-ha-606900-m03" [3c26d779-e97f-4226-8ae0-85ca512848cd] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kindnet-8977g" [97e55124-90c8-4cda-854c-ee1059fafdac] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kindnet-b68s4" [d2b7c03a-1303-4e1d-bf2b-2975716685d6] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kindnet-xfbg7" [d871c099-0872-4d03-b1fc-4fe5554f09d1] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-apiserver-ha-606900" [90f9b505-a404-4227-8a93-8d74ab235009] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-apiserver-ha-606900-m02" [b3373a21-b66f-42c9-a088-97e3a86cd9fd] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-apiserver-ha-606900-m03" [2a9bd19c-1d34-468f-9fd6-a82a198125eb] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-controller-manager-ha-606900" [d3660558-d0d0-430f-baeb-912cef1a751f] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-controller-manager-ha-606900-m02" [93c8139a-db05-4492-a62d-13ecabdadab6] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-controller-manager-ha-606900-m03" [dc1f0a22-f7e8-452d-925a-cb7e628d7e65] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-proxy-cjhcj" [9ab10380-cd1a-4487-9715-f82cb025149f] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-proxy-fk4wg" [3b8fe48c-5035-4e97-9a79-73907e53d2ef] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-proxy-s9lzf" [c0ba2c37-0dea-43c1-b2d4-ce36b6f6e9ff] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-scheduler-ha-606900" [6efc4fea-f6fe-4057-96b0-fd62ba3aba5d] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-scheduler-ha-606900-m02" [f1646aeb-90ea-46f7-a0f9-28b3b68f341c] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-scheduler-ha-606900-m03" [657236e0-85b7-4161-866d-7892752bd59c] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-vip-ha-606900" [540ec4bc-f9bc-4710-be1e-bb289e8cbea4] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-vip-ha-606900-m02" [9063c185-9922-4ca7-82df-34db3af5f0be] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "kube-vip-ha-606900-m03" [7bc6ea01-5d48-405a-8023-ac6b7a3f406d] Running
	I0318 11:24:26.602614    6988 system_pods.go:61] "storage-provisioner" [d03b3748-8b89-4a55-9e0e-871a5b79532f] Running
	I0318 11:24:26.602614    6988 system_pods.go:74] duration metric: took 142.2524ms to wait for pod list to return data ...
	I0318 11:24:26.602614    6988 default_sa.go:34] waiting for default service account to be created ...
	I0318 11:24:26.785014    6988 request.go:629] Waited for 182.3985ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:24:26.785014    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:24:26.785014    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:26.785014    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:26.785014    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:26.791138    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:26.791351    6988 default_sa.go:45] found service account: "default"
	I0318 11:24:26.791426    6988 default_sa.go:55] duration metric: took 188.8111ms for default service account to be created ...
	I0318 11:24:26.791426    6988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 11:24:26.988066    6988 request.go:629] Waited for 196.4447ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:24:26.988477    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/namespaces/kube-system/pods
	I0318 11:24:26.988477    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:26.988477    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:26.988477    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:26.999111    6988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:24:27.010204    6988 system_pods.go:86] 24 kube-system pods found
	I0318 11:24:27.010204    6988 system_pods.go:89] "coredns-5dd5756b68-jsf9x" [05681724-a32a-40c0-9f26-1c1eb9dffb65] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "coredns-5dd5756b68-wvh6v" [843ee0ec-fcfd-4763-8c92-acfe93bec900] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "etcd-ha-606900" [ed704c6d-aba3-496c-9988-c9f86218f1b4] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "etcd-ha-606900-m02" [a453b1e7-143c-4ea7-a1f4-f6dc6f8aa0b8] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "etcd-ha-606900-m03" [3c26d779-e97f-4226-8ae0-85ca512848cd] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kindnet-8977g" [97e55124-90c8-4cda-854c-ee1059fafdac] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kindnet-b68s4" [d2b7c03a-1303-4e1d-bf2b-2975716685d6] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kindnet-xfbg7" [d871c099-0872-4d03-b1fc-4fe5554f09d1] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kube-apiserver-ha-606900" [90f9b505-a404-4227-8a93-8d74ab235009] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kube-apiserver-ha-606900-m02" [b3373a21-b66f-42c9-a088-97e3a86cd9fd] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kube-apiserver-ha-606900-m03" [2a9bd19c-1d34-468f-9fd6-a82a198125eb] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kube-controller-manager-ha-606900" [d3660558-d0d0-430f-baeb-912cef1a751f] Running
	I0318 11:24:27.010204    6988 system_pods.go:89] "kube-controller-manager-ha-606900-m02" [93c8139a-db05-4492-a62d-13ecabdadab6] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-controller-manager-ha-606900-m03" [dc1f0a22-f7e8-452d-925a-cb7e628d7e65] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-proxy-cjhcj" [9ab10380-cd1a-4487-9715-f82cb025149f] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-proxy-fk4wg" [3b8fe48c-5035-4e97-9a79-73907e53d2ef] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-proxy-s9lzf" [c0ba2c37-0dea-43c1-b2d4-ce36b6f6e9ff] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-scheduler-ha-606900" [6efc4fea-f6fe-4057-96b0-fd62ba3aba5d] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-scheduler-ha-606900-m02" [f1646aeb-90ea-46f7-a0f9-28b3b68f341c] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-scheduler-ha-606900-m03" [657236e0-85b7-4161-866d-7892752bd59c] Running
	I0318 11:24:27.010836    6988 system_pods.go:89] "kube-vip-ha-606900" [540ec4bc-f9bc-4710-be1e-bb289e8cbea4] Running
	I0318 11:24:27.011069    6988 system_pods.go:89] "kube-vip-ha-606900-m02" [9063c185-9922-4ca7-82df-34db3af5f0be] Running
	I0318 11:24:27.011069    6988 system_pods.go:89] "kube-vip-ha-606900-m03" [7bc6ea01-5d48-405a-8023-ac6b7a3f406d] Running
	I0318 11:24:27.011069    6988 system_pods.go:89] "storage-provisioner" [d03b3748-8b89-4a55-9e0e-871a5b79532f] Running
	I0318 11:24:27.011069    6988 system_pods.go:126] duration metric: took 219.6415ms to wait for k8s-apps to be running ...
	I0318 11:24:27.011069    6988 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 11:24:27.029692    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:24:27.061009    6988 system_svc.go:56] duration metric: took 49.9395ms WaitForService to wait for kubelet
	I0318 11:24:27.061090    6988 kubeadm.go:576] duration metric: took 16.9884992s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:24:27.061164    6988 node_conditions.go:102] verifying NodePressure condition ...
	I0318 11:24:27.191089    6988 request.go:629] Waited for 129.9241ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.74:8443/api/v1/nodes
	I0318 11:24:27.191561    6988 round_trippers.go:463] GET https://172.25.148.74:8443/api/v1/nodes
	I0318 11:24:27.191636    6988 round_trippers.go:469] Request Headers:
	I0318 11:24:27.191636    6988 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:24:27.191636    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:24:27.198262    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:24:27.199950    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:24:27.200011    6988 node_conditions.go:123] node cpu capacity is 2
	I0318 11:24:27.200011    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:24:27.200011    6988 node_conditions.go:123] node cpu capacity is 2
	I0318 11:24:27.200011    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:24:27.200011    6988 node_conditions.go:123] node cpu capacity is 2
	I0318 11:24:27.200011    6988 node_conditions.go:105] duration metric: took 138.8468ms to run NodePressure ...
	I0318 11:24:27.200093    6988 start.go:240] waiting for startup goroutines ...
	I0318 11:24:27.200093    6988 start.go:254] writing updated cluster config ...
	I0318 11:24:27.213010    6988 ssh_runner.go:195] Run: rm -f paused
	I0318 11:24:27.365985    6988 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 11:24:27.369488    6988 out.go:177] * Done! kubectl is now configured to use "ha-606900" cluster and "default" namespace by default
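
	Several requests in the log above were deliberately delayed by client-go's client-side rate limiter (the "Waited for ... due to client-side throttling, not priority and fairness" lines emitted from request.go). A minimal sketch of where that limiter is configured, assuming a stock k8s.io/client-go setup; the QPS/Burst values here are illustrative, not minikube's actual settings:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the same way kubectl does.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// QPS/Burst feed the client-side token-bucket limiter; once the burst
	// allowance is spent, further calls block and client-go logs
	// "Waited for ... due to client-side throttling".
	config.QPS = 5    // illustrative: steady-state requests per second
	config.Burst = 10 // illustrative: short-term burst allowance

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
```

	The ~120-200ms waits above are therefore bookkeeping, not an API-server problem: the client simply issued several list requests back-to-back during startup verification.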
	
	
	==> Docker <==
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.208950221Z" level=info msg="shim disconnected" id=23a86fce80939cf98b998db897607584c302c28e79b6cda09523d78fb3250120 namespace=moby
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.209811324Z" level=warning msg="cleaning up after shim disconnected" id=23a86fce80939cf98b998db897607584c302c28e79b6cda09523d78fb3250120 namespace=moby
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.209976424Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.666490528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.666883529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.667220831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:20:08 ha-606900 dockerd[1335]: time="2024-03-18T11:20:08.668023433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:25:07 ha-606900 dockerd[1335]: time="2024-03-18T11:25:07.011047680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:25:07 ha-606900 dockerd[1335]: time="2024-03-18T11:25:07.011099083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:25:07 ha-606900 dockerd[1335]: time="2024-03-18T11:25:07.011127884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:25:07 ha-606900 dockerd[1335]: time="2024-03-18T11:25:07.011299993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:25:07 ha-606900 cri-dockerd[1221]: time="2024-03-18T11:25:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9dca65cf65d68e7faea72379814d24b03e693167d49b285033e2f3086b06f113/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 18 11:25:08 ha-606900 cri-dockerd[1221]: time="2024-03-18T11:25:08Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 18 11:25:08 ha-606900 dockerd[1335]: time="2024-03-18T11:25:08.887712484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:25:08 ha-606900 dockerd[1335]: time="2024-03-18T11:25:08.887944987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:25:08 ha-606900 dockerd[1335]: time="2024-03-18T11:25:08.887968987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:25:08 ha-606900 dockerd[1335]: time="2024-03-18T11:25:08.888631395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:26:43 ha-606900 dockerd[1329]: 2024/03/18 11:26:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:26:43 ha-606900 dockerd[1329]: 2024/03/18 11:26:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:26:44 ha-606900 dockerd[1329]: 2024/03/18 11:26:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:26:44 ha-606900 dockerd[1329]: 2024/03/18 11:26:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:26:44 ha-606900 dockerd[1329]: 2024/03/18 11:26:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:26:44 ha-606900 dockerd[1329]: 2024/03/18 11:26:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:26:44 ha-606900 dockerd[1329]: 2024/03/18 11:26:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:26:44 ha-606900 dockerd[1329]: 2024/03/18 11:26:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	42469c7975926       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   9dca65cf65d68       busybox-5b5d89c9d6-cqzzh
	4e7ce1aac9bdd       6e38f40d628db                                                                                         22 minutes ago      Running             storage-provisioner       1                   86e11d7aaf9c8       storage-provisioner
	567aff9e85a01       22aaebb38f4a9                                                                                         22 minutes ago      Running             kube-vip                  1                   fb6a851d39b23       kube-vip-ha-606900
	53cf29d4a3154       ead0a4a53df89                                                                                         26 minutes ago      Running             coredns                   0                   02eb20e1d0c5e       coredns-5dd5756b68-jsf9x
	bc7e44f9ada53       ead0a4a53df89                                                                                         26 minutes ago      Running             coredns                   0                   91f8065bd476d       coredns-5dd5756b68-wvh6v
	23a86fce80939       6e38f40d628db                                                                                         26 minutes ago      Exited              storage-provisioner       0                   86e11d7aaf9c8       storage-provisioner
	fa2d8375a385e       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago      Running             kindnet-cni               0                   8ea494cab2f3d       kindnet-b68s4
	c37e249b1e7ad       83f6cc407eed8                                                                                         26 minutes ago      Running             kube-proxy                0                   92445efc60881       kube-proxy-fk4wg
	dd2765df77984       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     27 minutes ago      Exited              kube-vip                  0                   fb6a851d39b23       kube-vip-ha-606900
	d934400d5984a       d058aa5ab969c                                                                                         27 minutes ago      Running             kube-controller-manager   0                   6c00bb606d005       kube-controller-manager-ha-606900
	4befa99d2f5fe       e3db313c6dbc0                                                                                         27 minutes ago      Running             kube-scheduler            0                   69ea36325f7d7       kube-scheduler-ha-606900
	63cfa3b4e52bf       73deb9a3f7025                                                                                         27 minutes ago      Running             etcd                      0                   b537ceac57f36       etcd-ha-606900
	3851638f3614b       7fe0e6f37db33                                                                                         27 minutes ago      Running             kube-apiserver            0                   dd85a073d0853       kube-apiserver-ha-606900
	
	
	==> coredns [53cf29d4a315] <==
	[INFO] 10.244.2.2:39456 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000185302s
	[INFO] 10.244.0.4:37529 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000245603s
	[INFO] 10.244.0.4:36770 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000281103s
	[INFO] 10.244.0.4:40199 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277203s
	[INFO] 10.244.0.4:40864 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167901s
	[INFO] 10.244.0.4:49822 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000209402s
	[INFO] 10.244.1.3:47850 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074901s
	[INFO] 10.244.1.3:43915 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098402s
	[INFO] 10.244.1.3:60347 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089501s
	[INFO] 10.244.1.3:45351 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000228302s
	[INFO] 10.244.1.3:41131 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000227102s
	[INFO] 10.244.1.3:42169 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000184002s
	[INFO] 10.244.2.2:38698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154001s
	[INFO] 10.244.2.2:35967 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059001s
	[INFO] 10.244.2.2:49319 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075901s
	[INFO] 10.244.0.4:48977 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160201s
	[INFO] 10.244.0.4:35098 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139301s
	[INFO] 10.244.0.4:52104 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000778s
	[INFO] 10.244.2.2:53693 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227203s
	[INFO] 10.244.2.2:37606 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154802s
	[INFO] 10.244.0.4:36724 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000164502s
	[INFO] 10.244.0.4:39051 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000063501s
	[INFO] 10.244.1.3:58746 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222202s
	[INFO] 10.244.1.3:36273 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000325604s
	[INFO] 10.244.1.3:43340 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000649s
	
	
	==> coredns [bc7e44f9ada5] <==
	[INFO] 10.244.0.4:51884 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000206103s
	[INFO] 10.244.0.4:45821 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.092397107s
	[INFO] 10.244.1.3:57907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268803s
	[INFO] 10.244.1.3:57374 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000089901s
	[INFO] 10.244.2.2:41341 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105201s
	[INFO] 10.244.2.2:34889 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000220102s
	[INFO] 10.244.2.2:40850 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212402s
	[INFO] 10.244.2.2:49832 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000463406s
	[INFO] 10.244.2.2:46881 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192502s
	[INFO] 10.244.0.4:48306 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014609865s
	[INFO] 10.244.0.4:37703 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003025634s
	[INFO] 10.244.0.4:36985 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134301s
	[INFO] 10.244.1.3:50891 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110501s
	[INFO] 10.244.1.3:50088 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086701s
	[INFO] 10.244.2.2:38617 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000266503s
	[INFO] 10.244.0.4:39046 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320903s
	[INFO] 10.244.1.3:50744 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229402s
	[INFO] 10.244.1.3:60928 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079201s
	[INFO] 10.244.1.3:49906 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000230102s
	[INFO] 10.244.1.3:48602 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112201s
	[INFO] 10.244.2.2:37146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129002s
	[INFO] 10.244.2.2:46050 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120301s
	[INFO] 10.244.0.4:48098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000266903s
	[INFO] 10.244.0.4:41403 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123602s
	[INFO] 10.244.1.3:37883 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000403004s
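
	The paired NXDOMAIN/NOERROR lookups in both coredns logs (e.g. "kubernetes.default.default.svc.cluster.local" failing just before "kubernetes.default.svc.cluster.local" succeeds) are expected given the resolv.conf cri-dockerd wrote earlier in the Docker log (search default.svc.cluster.local svc.cluster.local cluster.local, options ndots:5): a name with fewer than five dots is expanded through the search list before being tried verbatim. A small sketch of that candidate ordering; the candidates helper is hypothetical, for illustration only, not CoreDNS or resolver code:

```go
package main

import (
	"fmt"
	"strings"
)

// candidates returns the lookup order a resolver with the given search list
// and ndots setting would try for a name.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") >= ndots {
		out = append(out, name+".") // "qualified enough": try as-is first
	}
	for _, s := range search {
		out = append(out, name+"."+s+".")
	}
	if strings.Count(name, ".") < ndots {
		out = append(out, name+".") // absolute form is tried last
	}
	return out
}

func main() {
	// Search list taken from the pod resolv.conf in the Docker log above.
	search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, c := range candidates("kubernetes.default", search, 5) {
		fmt.Println(c)
	}
}
```

	Run against "kubernetes.default" this prints the NXDOMAIN candidate before the NOERROR one, matching the query pairs in the log.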
	
	
	==> describe nodes <==
	Name:               ha-606900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-606900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=ha-606900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T11_15_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:15:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-606900
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:42:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:40:42 +0000   Mon, 18 Mar 2024 11:15:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:40:42 +0000   Mon, 18 Mar 2024 11:15:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:40:42 +0000   Mon, 18 Mar 2024 11:15:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:40:42 +0000   Mon, 18 Mar 2024 11:16:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.148.74
	  Hostname:    ha-606900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 9864a1ae82b441db92445aa165d52416
	  System UUID:                a4deeaa1-108c-1843-84eb-dbf36f30972d
	  Boot ID:                    b831565d-b085-4bac-8a24-2cb98d43f687
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-cqzzh             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-5dd5756b68-jsf9x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-5dd5756b68-wvh6v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-606900                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-b68s4                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-606900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-606900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-fk4wg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-606900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-606900                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26m   kube-proxy       
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-606900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node ha-606900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node ha-606900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26m   node-controller  Node ha-606900 event: Registered Node ha-606900 in Controller
	  Normal  NodeReady                26m   kubelet          Node ha-606900 status is now: NodeReady
	  Normal  RegisteredNode           22m   node-controller  Node ha-606900 event: Registered Node ha-606900 in Controller
	  Normal  RegisteredNode           18m   node-controller  Node ha-606900 event: Registered Node ha-606900 in Controller
	
	
	Name:               ha-606900-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-606900-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=ha-606900
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T11_20_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:19:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-606900-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:42:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:40:53 +0000   Mon, 18 Mar 2024 11:19:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:40:53 +0000   Mon, 18 Mar 2024 11:19:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:40:53 +0000   Mon, 18 Mar 2024 11:19:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:40:53 +0000   Mon, 18 Mar 2024 11:20:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.148.106
	  Hostname:    ha-606900-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 ab288d88b56445ebb5abe7797a5e23d6
	  System UUID:                18ca2403-b35d-b145-a154-f437766cc0e4
	  Boot ID:                    4966d0bb-7373-40b5-bf9e-32c895219fd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-qdlmz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-606900-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-8977g                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-606900-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-606900-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-s9lzf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-606900-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-606900-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        22m   kube-proxy       
	  Normal  RegisteredNode  23m   node-controller  Node ha-606900-m02 event: Registered Node ha-606900-m02 in Controller
	  Normal  RegisteredNode  22m   node-controller  Node ha-606900-m02 event: Registered Node ha-606900-m02 in Controller
	  Normal  RegisteredNode  18m   node-controller  Node ha-606900-m02 event: Registered Node ha-606900-m02 in Controller
	
	
	Name:               ha-606900-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-606900-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=ha-606900
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T11_24_09_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:24:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-606900-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:42:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:40:56 +0000   Mon, 18 Mar 2024 11:24:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:40:56 +0000   Mon, 18 Mar 2024 11:24:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:40:56 +0000   Mon, 18 Mar 2024 11:24:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:40:56 +0000   Mon, 18 Mar 2024 11:24:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.158.182
	  Hostname:    ha-606900-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 267d48d4e7d64d96be5e77ed1414dceb
	  System UUID:                f706f81a-2ac4-3d4d-a5c3-e84558596b81
	  Boot ID:                    4714b530-227e-4c69-a14d-1e68cd198ab2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-bsmjb                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-606900-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-xfbg7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-606900-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-606900-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-cjhcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-606900-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-606900-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        18m   kube-proxy       
	  Normal  RegisteredNode  18m   node-controller  Node ha-606900-m03 event: Registered Node ha-606900-m03 in Controller
	  Normal  RegisteredNode  18m   node-controller  Node ha-606900-m03 event: Registered Node ha-606900-m03 in Controller
	  Normal  RegisteredNode  18m   node-controller  Node ha-606900-m03 event: Registered Node ha-606900-m03 in Controller
	
	
	Name:               ha-606900-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-606900-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=ha-606900
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T11_30_04_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:30:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-606900-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:42:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:40:45 +0000   Mon, 18 Mar 2024 11:30:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:40:45 +0000   Mon, 18 Mar 2024 11:30:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:40:45 +0000   Mon, 18 Mar 2024 11:30:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:40:45 +0000   Mon, 18 Mar 2024 11:30:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.153.107
	  Hostname:    ha-606900-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 055abb3e5a6f435c8cf86d574ab6a3ac
	  System UUID:                167f5569-7b34-c34a-b41b-b27ad744bf35
	  Boot ID:                    c3e4274a-ea4f-4e67-9190-94c877d3c3ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-g95fl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-d82s2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet          Node ha-606900-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet          Node ha-606900-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x5 over 12m)  kubelet          Node ha-606900-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                node-controller  Node ha-606900-m04 event: Registered Node ha-606900-m04 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node ha-606900-m04 event: Registered Node ha-606900-m04 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node ha-606900-m04 event: Registered Node ha-606900-m04 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-606900-m04 status is now: NodeReady
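
	The MemoryPressure/DiskPressure/PIDPressure rows in the node descriptions above are what a readiness check like the "verifying NodePressure condition" step in the minikube log inspects, alongside the per-node capacity ("node cpu capacity is 2"). A minimal client-go sketch of such a check; illustrative only, not minikube's implementation:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity mirrors the "node cpu capacity is 2" lines in the log above.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			// A healthy node reports Status=False for all three pressure conditions.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
```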
	
	
	==> dmesg <==
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar18 11:14] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.184104] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Mar18 11:15] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.113546] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.575588] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	[  +0.211724] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
	[  +0.231664] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[  +2.906269] systemd-fstab-generator[1174]: Ignoring "noauto" option for root device
	[  +0.211383] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.209542] systemd-fstab-generator[1198]: Ignoring "noauto" option for root device
	[  +0.297375] systemd-fstab-generator[1214]: Ignoring "noauto" option for root device
	[ +13.735575] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.128311] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.830205] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.678483] systemd-fstab-generator[1792]: Ignoring "noauto" option for root device
	[  +0.107563] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.091100] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.954643] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[Mar18 11:16] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.548836] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.903384] kauditd_printk_skb: 29 callbacks suppressed
	[Mar18 11:19] hrtimer: interrupt took 1480705 ns
	[Mar18 11:20] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [63cfa3b4e52b] <==
	{"level":"warn","ts":"2024-03-18T11:39:27.753292Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T11:39:26.566571Z","time spent":"1.186632311s","remote":"127.0.0.1:33662","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/ha-606900-m04\" mod_revision:3944 > success:<request_put:<key:\"/registry/leases/kube-node-lease/ha-606900-m04\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/ha-606900-m04\" > >"}
	{"level":"warn","ts":"2024-03-18T11:39:27.755316Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"590.543529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T11:39:27.755377Z","caller":"traceutil/trace.go:171","msg":"trace[691507624] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:3970; }","duration":"590.65153ms","start":"2024-03-18T11:39:27.16471Z","end":"2024-03-18T11:39:27.755362Z","steps":["trace[691507624] 'agreement among raft nodes before linearized reading'  (duration: 590.523329ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:39:27.755599Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T11:39:27.164692Z","time spent":"590.888833ms","remote":"127.0.0.1:33622","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":28,"request content":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true "}
	{"level":"warn","ts":"2024-03-18T11:39:27.756162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.772757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-03-18T11:39:27.756231Z","caller":"traceutil/trace.go:171","msg":"trace[1004031968] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:3970; }","duration":"212.841757ms","start":"2024-03-18T11:39:27.543375Z","end":"2024-03-18T11:39:27.756217Z","steps":["trace[1004031968] 'agreement among raft nodes before linearized reading'  (duration: 212.729756ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:39:27.757088Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.157791ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T11:39:27.757144Z","caller":"traceutil/trace.go:171","msg":"trace[263696733] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:3970; }","duration":"249.215192ms","start":"2024-03-18T11:39:27.507921Z","end":"2024-03-18T11:39:27.757136Z","steps":["trace[263696733] 'agreement among raft nodes before linearized reading'  (duration: 249.133491ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:39:30.338496Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"c4c65f858326f0d8","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"356.857618ms"}
	{"level":"warn","ts":"2024-03-18T11:39:30.338858Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"9c726440491095bc","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"357.224022ms"}
	{"level":"info","ts":"2024-03-18T11:39:30.339794Z","caller":"traceutil/trace.go:171","msg":"trace[841875227] transaction","detail":"{read_only:false; response_revision:3975; number_of_response:1; }","duration":"553.395087ms","start":"2024-03-18T11:39:29.786382Z","end":"2024-03-18T11:39:30.339777Z","steps":["trace[841875227] 'process raft request'  (duration: 552.893082ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:39:30.340247Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T11:39:29.78637Z","time spent":"553.795391ms","remote":"127.0.0.1:33592","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1094,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:3971 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1021 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-03-18T11:39:30.345441Z","caller":"traceutil/trace.go:171","msg":"trace[2031171807] transaction","detail":"{read_only:false; response_revision:3976; number_of_response:1; }","duration":"558.653935ms","start":"2024-03-18T11:39:29.786767Z","end":"2024-03-18T11:39:30.345421Z","steps":["trace[2031171807] 'process raft request'  (duration: 558.548134ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:39:30.345634Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T11:39:29.786759Z","time spent":"558.818336ms","remote":"127.0.0.1:33662","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":420,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:3974 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:370 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >"}
	{"level":"warn","ts":"2024-03-18T11:39:36.804882Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"c4c65f858326f0d8","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"137.113696ms"}
	{"level":"warn","ts":"2024-03-18T11:39:36.805277Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"9c726440491095bc","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"137.513ms"}
	{"level":"info","ts":"2024-03-18T11:39:36.806952Z","caller":"traceutil/trace.go:171","msg":"trace[435247509] linearizableReadLoop","detail":"{readStateIndex:4850; appliedIndex:4850; }","duration":"298.454241ms","start":"2024-03-18T11:39:36.508484Z","end":"2024-03-18T11:39:36.806938Z","steps":["trace[435247509] 'read index received'  (duration: 298.450341ms)","trace[435247509] 'applied index is now lower than readState.Index'  (duration: 2.9µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T11:39:36.845911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.453099ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T11:39:36.846333Z","caller":"traceutil/trace.go:171","msg":"trace[1322244681] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:3990; }","duration":"337.881603ms","start":"2024-03-18T11:39:36.508432Z","end":"2024-03-18T11:39:36.846314Z","steps":["trace[1322244681] 'agreement among raft nodes before linearized reading'  (duration: 298.627143ms)","trace[1322244681] 'range keys from in-memory index tree'  (duration: 38.630355ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T11:39:36.846527Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T11:39:36.508416Z","time spent":"338.097606ms","remote":"127.0.0.1:33404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-03-18T11:39:36.846967Z","caller":"traceutil/trace.go:171","msg":"trace[1661608310] transaction","detail":"{read_only:false; response_revision:3991; number_of_response:1; }","duration":"371.626914ms","start":"2024-03-18T11:39:36.475327Z","end":"2024-03-18T11:39:36.846954Z","steps":["trace[1661608310] 'process raft request'  (duration: 330.424536ms)","trace[1661608310] 'compare'  (duration: 40.174469ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T11:39:36.847345Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T11:39:36.475313Z","time spent":"371.711015ms","remote":"127.0.0.1:33662","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":420,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:3988 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:370 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >"}
	{"level":"info","ts":"2024-03-18T11:40:42.485219Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3418}
	{"level":"info","ts":"2024-03-18T11:40:42.514903Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":3418,"took":"28.729262ms","hash":2041113269}
	{"level":"info","ts":"2024-03-18T11:40:42.514974Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2041113269,"revision":3418,"compact-revision":2677}
	
	
	==> kernel <==
	 11:42:57 up 29 min,  0 users,  load average: 0.43, 0.55, 0.51
	Linux ha-606900 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fa2d8375a385] <==
	I0318 11:42:23.849363       1 main.go:250] Node ha-606900-m04 has CIDR [10.244.3.0/24] 
	I0318 11:42:33.880310       1 main.go:223] Handling node with IPs: map[172.25.148.74:{}]
	I0318 11:42:33.880455       1 main.go:227] handling current node
	I0318 11:42:33.880617       1 main.go:223] Handling node with IPs: map[172.25.148.106:{}]
	I0318 11:42:33.880850       1 main.go:250] Node ha-606900-m02 has CIDR [10.244.1.0/24] 
	I0318 11:42:33.881305       1 main.go:223] Handling node with IPs: map[172.25.158.182:{}]
	I0318 11:42:33.881416       1 main.go:250] Node ha-606900-m03 has CIDR [10.244.2.0/24] 
	I0318 11:42:33.881758       1 main.go:223] Handling node with IPs: map[172.25.153.107:{}]
	I0318 11:42:33.882232       1 main.go:250] Node ha-606900-m04 has CIDR [10.244.3.0/24] 
	I0318 11:42:43.897289       1 main.go:223] Handling node with IPs: map[172.25.148.74:{}]
	I0318 11:42:43.897404       1 main.go:227] handling current node
	I0318 11:42:43.897422       1 main.go:223] Handling node with IPs: map[172.25.148.106:{}]
	I0318 11:42:43.897431       1 main.go:250] Node ha-606900-m02 has CIDR [10.244.1.0/24] 
	I0318 11:42:43.897621       1 main.go:223] Handling node with IPs: map[172.25.158.182:{}]
	I0318 11:42:43.898030       1 main.go:250] Node ha-606900-m03 has CIDR [10.244.2.0/24] 
	I0318 11:42:43.898234       1 main.go:223] Handling node with IPs: map[172.25.153.107:{}]
	I0318 11:42:43.898369       1 main.go:250] Node ha-606900-m04 has CIDR [10.244.3.0/24] 
	I0318 11:42:53.913145       1 main.go:223] Handling node with IPs: map[172.25.148.74:{}]
	I0318 11:42:53.913312       1 main.go:227] handling current node
	I0318 11:42:53.913329       1 main.go:223] Handling node with IPs: map[172.25.148.106:{}]
	I0318 11:42:53.913338       1 main.go:250] Node ha-606900-m02 has CIDR [10.244.1.0/24] 
	I0318 11:42:53.914027       1 main.go:223] Handling node with IPs: map[172.25.158.182:{}]
	I0318 11:42:53.914144       1 main.go:250] Node ha-606900-m03 has CIDR [10.244.2.0/24] 
	I0318 11:42:53.914609       1 main.go:223] Handling node with IPs: map[172.25.153.107:{}]
	I0318 11:42:53.915107       1 main.go:250] Node ha-606900-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3851638f3614] <==
	Trace[1771476870]: [507.344276ms] [507.344276ms] END
	I0318 11:36:27.223118       1 trace.go:236] Trace[1057873536]: "Update" accept:application/json, */*,audit-id:57eb5d62-79e8-400f-90e6-6ce121fe0322,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (18-Mar-2024 11:36:26.095) (total time: 1127ms):
	Trace[1057873536]: ["GuaranteedUpdate etcd3" audit-id:57eb5d62-79e8-400f-90e6-6ce121fe0322,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 1127ms (11:36:26.095)
	Trace[1057873536]:  ---"Txn call completed" 1125ms (11:36:27.222)]
	Trace[1057873536]: [1.127286612s] [1.127286612s] END
	I0318 11:36:29.151961       1 trace.go:236] Trace[1691059937]: "Update" accept:application/json, */*,audit-id:714095dc-686d-4069-92a3-b24f79d77d67,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (18-Mar-2024 11:36:28.233) (total time: 918ms):
	Trace[1691059937]: ["GuaranteedUpdate etcd3" audit-id:714095dc-686d-4069-92a3-b24f79d77d67,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 917ms (11:36:28.234)
	Trace[1691059937]:  ---"Txn call completed" 916ms (11:36:29.151)]
	Trace[1691059937]: [918.09764ms] [918.09764ms] END
	I0318 11:39:27.754398       1 trace.go:236] Trace[1080124676]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:013c475a-2d26-4824-86eb-440d1d284031,client:172.25.153.107,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-606900-m04,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (18-Mar-2024 11:39:26.564) (total time: 1190ms):
	Trace[1080124676]: ["GuaranteedUpdate etcd3" audit-id:013c475a-2d26-4824-86eb-440d1d284031,key:/leases/kube-node-lease/ha-606900-m04,type:*coordination.Lease,resource:leases.coordination.k8s.io 1189ms (11:39:26.564)
	Trace[1080124676]:  ---"Txn call completed" 1188ms (11:39:27.754)]
	Trace[1080124676]: [1.190119943s] [1.190119943s] END
	I0318 11:39:27.756572       1 trace.go:236] Trace[2141120928]: "Update" accept:application/json, */*,audit-id:6439cd77-f131-42de-a459-17e085585c04,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (18-Mar-2024 11:39:26.531) (total time: 1225ms):
	Trace[2141120928]: ["GuaranteedUpdate etcd3" audit-id:6439cd77-f131-42de-a459-17e085585c04,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 1224ms (11:39:26.531)
	Trace[2141120928]:  ---"Txn call completed" 1223ms (11:39:27.756)]
	Trace[2141120928]: [1.225260266s] [1.225260266s] END
	I0318 11:39:30.341171       1 trace.go:236] Trace[941452151]: "Update" accept:application/json, */*,audit-id:a9a96df0-2cce-410f-9592-e1a376919c8c,client:172.25.148.74,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (18-Mar-2024 11:39:29.784) (total time: 556ms):
	Trace[941452151]: ["GuaranteedUpdate etcd3" audit-id:a9a96df0-2cce-410f-9592-e1a376919c8c,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 555ms (11:39:29.785)
	Trace[941452151]:  ---"Txn call completed" 554ms (11:39:30.340)]
	Trace[941452151]: [556.730318ms] [556.730318ms] END
	I0318 11:39:30.347280       1 trace.go:236] Trace[740286283]: "Update" accept:application/json, */*,audit-id:016e3536-78f7-4df1-bb45-42c1bf1a5525,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (18-Mar-2024 11:39:29.785) (total time: 561ms):
	Trace[740286283]: ["GuaranteedUpdate etcd3" audit-id:016e3536-78f7-4df1-bb45-42c1bf1a5525,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 561ms (11:39:29.785)
	Trace[740286283]:  ---"Txn call completed" 560ms (11:39:30.346)]
	Trace[740286283]: [561.34866ms] [561.34866ms] END
	
	
	==> kube-controller-manager [d934400d5984] <==
	I0318 11:25:06.598797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="435.823µs"
	I0318 11:25:09.194109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="63.601µs"
	I0318 11:25:09.238358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="19.523324ms"
	I0318 11:25:09.238852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="329.404µs"
	I0318 11:25:09.395995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="27.262012ms"
	I0318 11:25:09.396711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="93.901µs"
	I0318 11:25:09.929181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="132.702µs"
	I0318 11:25:10.137005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="64.947743ms"
	I0318 11:25:10.137514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="439.405µs"
	I0318 11:25:39.578252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66.701µs"
	I0318 11:25:40.593497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="58.601µs"
	I0318 11:25:40.621742       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.001µs"
	I0318 11:25:40.634734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="117.401µs"
	E0318 11:30:01.879546       1 certificate_controller.go:146] Sync csr-xp4xf failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-xp4xf": the object has been modified; please apply your changes to the latest version and try again
	I0318 11:30:03.442798       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-606900-m04\" does not exist"
	I0318 11:30:03.481117       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-g95fl"
	I0318 11:30:03.509447       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-d82s2"
	I0318 11:30:03.530245       1 range_allocator.go:380] "Set node PodCIDR" node="ha-606900-m04" podCIDRs=["10.244.3.0/24"]
	I0318 11:30:03.665952       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-zs54s"
	I0318 11:30:03.687245       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-hcl2x"
	I0318 11:30:03.939863       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-jm8cw"
	I0318 11:30:04.003393       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-x6c68"
	I0318 11:30:07.967221       1 event.go:307] "Event occurred" object="ha-606900-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-606900-m04 event: Registered Node ha-606900-m04 in Controller"
	I0318 11:30:08.460989       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-606900-m04"
	I0318 11:30:25.183087       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-606900-m04"
	
	
	==> kube-proxy [c37e249b1e7a] <==
	I0318 11:16:04.281003       1 server_others.go:69] "Using iptables proxy"
	I0318 11:16:04.298839       1 node.go:141] Successfully retrieved node IP: 172.25.148.74
	I0318 11:16:04.383562       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 11:16:04.383778       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 11:16:04.387997       1 server_others.go:152] "Using iptables Proxier"
	I0318 11:16:04.388202       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 11:16:04.388399       1 server.go:846] "Version info" version="v1.28.4"
	I0318 11:16:04.388499       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 11:16:04.390399       1 config.go:188] "Starting service config controller"
	I0318 11:16:04.390615       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 11:16:04.390978       1 config.go:97] "Starting endpoint slice config controller"
	I0318 11:16:04.391132       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 11:16:04.391922       1 config.go:315] "Starting node config controller"
	I0318 11:16:04.392004       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 11:16:04.492379       1 shared_informer.go:318] Caches are synced for service config
	I0318 11:16:04.492379       1 shared_informer.go:318] Caches are synced for node config
	I0318 11:16:04.492444       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4befa99d2f5f] <==
	E0318 11:15:46.509501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 11:15:46.509712       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 11:15:46.509773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 11:15:46.563356       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 11:15:46.563414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 11:15:46.641919       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 11:15:46.642056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 11:15:46.747094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 11:15:46.749032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 11:15:46.902355       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 11:15:46.902529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 11:15:46.966114       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 11:15:46.966290       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 11:15:47.008866       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 11:15:47.010165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 11:15:47.013882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 11:15:47.013908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 11:15:47.052615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 11:15:47.052891       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 11:15:48.642770       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 11:25:06.004315       1 cache.go:518] "Pod was added to a different node than it was assumed" podKey="1f9b2790-c02e-4e41-b946-d6272e6410fd" pod="default/busybox-5b5d89c9d6-5dnk2" assumedNode="ha-606900-m02" currentNode="ha-606900-m03"
	E0318 11:25:06.005748       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-5dnk2\": pod busybox-5b5d89c9d6-5dnk2 is already assigned to node \"ha-606900-m02\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-5dnk2" node="ha-606900-m03"
	E0318 11:25:06.007616       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 1f9b2790-c02e-4e41-b946-d6272e6410fd(default/busybox-5b5d89c9d6-5dnk2) was assumed on ha-606900-m03 but assigned to ha-606900-m02"
	E0318 11:25:06.011953       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-5dnk2\": pod busybox-5b5d89c9d6-5dnk2 is already assigned to node \"ha-606900-m02\"" pod="default/busybox-5b5d89c9d6-5dnk2"
	I0318 11:25:06.012609       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-5dnk2" node="ha-606900-m02"
	
	
	==> kubelet <==
	Mar 18 11:38:49 ha-606900 kubelet[2850]: E0318 11:38:49.940162    2850 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:38:49 ha-606900 kubelet[2850]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:38:49 ha-606900 kubelet[2850]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:38:49 ha-606900 kubelet[2850]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:38:49 ha-606900 kubelet[2850]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:39:49 ha-606900 kubelet[2850]: E0318 11:39:49.942787    2850 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:39:49 ha-606900 kubelet[2850]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:39:49 ha-606900 kubelet[2850]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:39:49 ha-606900 kubelet[2850]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:39:49 ha-606900 kubelet[2850]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:40:49 ha-606900 kubelet[2850]: E0318 11:40:49.949447    2850 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:40:49 ha-606900 kubelet[2850]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:40:49 ha-606900 kubelet[2850]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:40:49 ha-606900 kubelet[2850]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:40:49 ha-606900 kubelet[2850]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:41:49 ha-606900 kubelet[2850]: E0318 11:41:49.941732    2850 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:41:49 ha-606900 kubelet[2850]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:41:49 ha-606900 kubelet[2850]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:41:49 ha-606900 kubelet[2850]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:41:49 ha-606900 kubelet[2850]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:42:49 ha-606900 kubelet[2850]: E0318 11:42:49.940903    2850 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:42:49 ha-606900 kubelet[2850]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:42:49 ha-606900 kubelet[2850]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:42:49 ha-606900 kubelet[2850]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:42:49 ha-606900 kubelet[2850]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0318 11:42:48.805968   12792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-606900 -n ha-606900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-606900 -n ha-606900: (12.8453409s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-606900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (680.72s)

TestMultiNode/serial/PingHostFrom2Pods (58.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- exec busybox-5b5d89c9d6-48qkw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- exec busybox-5b5d89c9d6-48qkw -- sh -c "ping -c 1 172.25.144.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- exec busybox-5b5d89c9d6-48qkw -- sh -c "ping -c 1 172.25.144.1": exit status 1 (10.5111064s)

-- stdout --
	PING 172.25.144.1 (172.25.144.1): 56 data bytes
	
	--- 172.25.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0318 12:23:12.291390   13692 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.25.144.1) from pod (busybox-5b5d89c9d6-48qkw): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- exec busybox-5b5d89c9d6-hmhdf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- exec busybox-5b5d89c9d6-hmhdf -- sh -c "ping -c 1 172.25.144.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- exec busybox-5b5d89c9d6-hmhdf -- sh -c "ping -c 1 172.25.144.1": exit status 1 (10.5428127s)

-- stdout --
	PING 172.25.144.1 (172.25.144.1): 56 data bytes
	
	--- 172.25.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0318 12:23:23.358498   13568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.25.144.1) from pod (busybox-5b5d89c9d6-hmhdf): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-642600 -n multinode-642600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-642600 -n multinode-642600: (12.5839683s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 logs -n 25: (9.0220288s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-751800 ssh -- ls                    | mount-start-2-751800 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:11 UTC | 18 Mar 24 12:11 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-751800                           | mount-start-1-751800 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:11 UTC | 18 Mar 24 12:12 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-751800 ssh -- ls                    | mount-start-2-751800 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:12 UTC | 18 Mar 24 12:12 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-751800                           | mount-start-2-751800 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:12 UTC | 18 Mar 24 12:12 UTC |
	| start   | -p mount-start-2-751800                           | mount-start-2-751800 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:12 UTC | 18 Mar 24 12:14 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-751800 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:14 UTC |                     |
	|         | --profile mount-start-2-751800 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-751800 ssh -- ls                    | mount-start-2-751800 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:14 UTC | 18 Mar 24 12:15 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-751800                           | mount-start-2-751800 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:15 UTC | 18 Mar 24 12:15 UTC |
	| delete  | -p mount-start-1-751800                           | mount-start-1-751800 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:15 UTC | 18 Mar 24 12:15 UTC |
	| start   | -p multinode-642600                               | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:15 UTC | 18 Mar 24 12:22 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- apply -f                   | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC | 18 Mar 24 12:23 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- rollout                    | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC | 18 Mar 24 12:23 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- get pods -o                | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC | 18 Mar 24 12:23 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- get pods -o                | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC | 18 Mar 24 12:23 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- exec                       | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC | 18 Mar 24 12:23 UTC |
	|         | busybox-5b5d89c9d6-48qkw --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- exec                       | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC | 18 Mar 24 12:23 UTC |
	|         | busybox-5b5d89c9d6-hmhdf --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- exec                       | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC | 18 Mar 24 12:23 UTC |
	|         | busybox-5b5d89c9d6-48qkw --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- exec                       | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC | 18 Mar 24 12:23 UTC |
	|         | busybox-5b5d89c9d6-hmhdf --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- exec                       | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC | 18 Mar 24 12:23 UTC |
	|         | busybox-5b5d89c9d6-48qkw -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- exec                       | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC | 18 Mar 24 12:23 UTC |
	|         | busybox-5b5d89c9d6-hmhdf -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- get pods -o                | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC | 18 Mar 24 12:23 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- exec                       | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC | 18 Mar 24 12:23 UTC |
	|         | busybox-5b5d89c9d6-48qkw                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- exec                       | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC |                     |
	|         | busybox-5b5d89c9d6-48qkw -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.144.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- exec                       | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC | 18 Mar 24 12:23 UTC |
	|         | busybox-5b5d89c9d6-hmhdf                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-642600 -- exec                       | multinode-642600     | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:23 UTC |                     |
	|         | busybox-5b5d89c9d6-hmhdf -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.144.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:15:33
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:15:33.390999    2644 out.go:291] Setting OutFile to fd 1064 ...
	I0318 12:15:33.391994    2644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:15:33.391994    2644 out.go:304] Setting ErrFile to fd 740...
	I0318 12:15:33.391994    2644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:15:33.419781    2644 out.go:298] Setting JSON to false
	I0318 12:15:33.423194    2644 start.go:129] hostinfo: {"hostname":"minikube6","uptime":139457,"bootTime":1710624675,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0318 12:15:33.423320    2644 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 12:15:33.433242    2644 out.go:177] * [multinode-642600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0318 12:15:33.439079    2644 notify.go:220] Checking for updates...
	I0318 12:15:33.442704    2644 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:15:33.447764    2644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:15:33.453502    2644 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0318 12:15:33.459468    2644 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 12:15:33.465413    2644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:15:33.470567    2644 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:15:33.471391    2644 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:15:38.989488    2644 out.go:177] * Using the hyperv driver based on user configuration
	I0318 12:15:38.993440    2644 start.go:297] selected driver: hyperv
	I0318 12:15:38.993440    2644 start.go:901] validating driver "hyperv" against <nil>
	I0318 12:15:38.993440    2644 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:15:39.045073    2644 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 12:15:39.047162    2644 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:15:39.047423    2644 cni.go:84] Creating CNI manager for ""
	I0318 12:15:39.047478    2644 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 12:15:39.047478    2644 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 12:15:39.047550    2644 start.go:340] cluster config:
	{Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:15:39.047550    2644 iso.go:125] acquiring lock: {Name:mk859ea173f7c19f70b69d7017f4a5a661cd1500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:15:39.052180    2644 out.go:177] * Starting "multinode-642600" primary control-plane node in "multinode-642600" cluster
	I0318 12:15:39.054233    2644 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 12:15:39.054233    2644 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 12:15:39.054233    2644 cache.go:56] Caching tarball of preloaded images
	I0318 12:15:39.054880    2644 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 12:15:39.054880    2644 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 12:15:39.054880    2644 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:15:39.055496    2644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json: {Name:mk422368cc309deed087c6a449296e48a3ec2fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:15:39.056262    2644 start.go:360] acquireMachinesLock for multinode-642600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:15:39.056262    2644 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-642600"
	I0318 12:15:39.056262    2644 start.go:93] Provisioning new machine with config: &{Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 12:15:39.056262    2644 start.go:125] createHost starting for "" (driver="hyperv")
	I0318 12:15:39.060252    2644 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 12:15:39.061353    2644 start.go:159] libmachine.API.Create for "multinode-642600" (driver="hyperv")
	I0318 12:15:39.062259    2644 client.go:168] LocalClient.Create starting
	I0318 12:15:39.063262    2644 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0318 12:15:39.063262    2644 main.go:141] libmachine: Decoding PEM data...
	I0318 12:15:39.063262    2644 main.go:141] libmachine: Parsing certificate...
	I0318 12:15:39.063262    2644 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0318 12:15:39.063262    2644 main.go:141] libmachine: Decoding PEM data...
	I0318 12:15:39.064260    2644 main.go:141] libmachine: Parsing certificate...
	I0318 12:15:39.064260    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 12:15:41.287511    2644 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 12:15:41.288209    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:15:41.288209    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 12:15:43.088088    2644 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 12:15:43.088088    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:15:43.088247    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 12:15:44.669689    2644 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 12:15:44.670031    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:15:44.670031    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 12:15:48.612120    2644 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 12:15:48.612120    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:15:48.614750    2644 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 12:15:49.181439    2644 main.go:141] libmachine: Creating SSH key...
	I0318 12:15:49.599178    2644 main.go:141] libmachine: Creating VM...
	I0318 12:15:49.599178    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 12:15:52.680697    2644 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 12:15:52.680792    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:15:52.680876    2644 main.go:141] libmachine: Using switch "Default Switch"
	I0318 12:15:52.680876    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 12:15:54.519642    2644 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 12:15:54.520616    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:15:54.520616    2644 main.go:141] libmachine: Creating VHD
	I0318 12:15:54.520865    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 12:15:58.398728    2644 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 2886B19B-CC32-4FAF-AAD4-9972490C9F1B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 12:15:58.425471    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:15:58.426360    2644 main.go:141] libmachine: Writing magic tar header
	I0318 12:15:58.426456    2644 main.go:141] libmachine: Writing SSH key tar header
	I0318 12:15:58.435716    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 12:16:01.671666    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:16:01.671666    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:01.671780    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\disk.vhd' -SizeBytes 20000MB
	I0318 12:16:04.309761    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:16:04.309761    2644 main.go:141] libmachine: [stderr =====>] : 
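
Note: the run above shows the usual docker-machine bootstrap for Hyper-V: create a tiny 10 MB fixed VHD, write a "magic tar header" plus the SSH key straight into it, convert it to a dynamic VHD (deleting the source), then grow it to the requested 20000 MB. A fixed VHD stores raw disk data from byte 0 (its 512-byte footer sits at the end of the file), so a tar stream written at the start of the file survives Convert-VHD and can be unpacked by the guest on first boot. A hedged sketch of the write step; the file name inside the tar and the function name are illustrative:

    package diskutil

    import (
        "archive/tar"
        "os"
    )

    // writeKeyIntoRawDisk overwrites the start of a fixed VHD's data area
    // with a tar stream carrying the SSH key; the guest's init is expected
    // to detect the tar marker and extract it on first boot.
    func writeKeyIntoRawDisk(vhdPath string, key []byte) error {
        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(key))}
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(key); err != nil {
            return err
        }
        return tw.Close()
    }
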
	I0318 12:16:04.310331    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-642600 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 12:16:08.126173    2644 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-642600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 12:16:08.126173    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:08.126173    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-642600 -DynamicMemoryEnabled $false
	I0318 12:16:10.484198    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:16:10.484198    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:10.484650    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-642600 -Count 2
	I0318 12:16:12.778244    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:16:12.778244    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:12.778513    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-642600 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\boot2docker.iso'
	I0318 12:16:15.498513    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:16:15.498513    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:15.499471    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-642600 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\disk.vhd'
	I0318 12:16:18.335288    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:16:18.335501    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:18.335501    2644 main.go:141] libmachine: Starting VM...
	I0318 12:16:18.335501    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-642600
	I0318 12:16:21.532040    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:16:21.532040    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:21.532040    2644 main.go:141] libmachine: Waiting for host to start...
	I0318 12:16:21.532040    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:16:23.926282    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:16:23.926282    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:23.926783    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:16:26.561338    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:16:26.561455    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:27.573003    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:16:29.883334    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:16:29.883334    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:29.883967    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:16:32.485782    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:16:32.485831    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:33.500753    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:16:35.817424    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:16:35.818389    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:35.818523    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:16:38.465022    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:16:38.465301    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:39.479530    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:16:41.749940    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:16:41.750139    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:41.750312    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:16:44.360623    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:16:44.360685    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:45.361918    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:16:47.693963    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:16:47.693963    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:47.694251    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:16:50.414533    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:16:50.414533    2644 main.go:141] libmachine: [stderr =====>] : 
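
Note: "Waiting for host to start..." above is a plain poll loop: query the VM state, then the first IP of the first NIC, and pause about a second between rounds. In this run the address only appears on the fifth IP query, once the guest's DHCP lease lands. A sketch in Go, with illustrative helper names:

    package hypervutil

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs a single PowerShell snippet and returns trimmed stdout.
    func ps(script string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP blocks until the named VM is Running and its first network
    // adapter reports an address, or the timeout elapses.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
            if err != nil {
                return "", err
            }
            if state == "Running" {
                ip, err := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
                if err != nil {
                    return "", err
                }
                if ip != "" {
                    return ip, nil
                }
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }
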
	I0318 12:16:50.414595    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:16:52.592573    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:16:52.592573    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:52.592573    2644 machine.go:94] provisionDockerMachine start ...
	I0318 12:16:52.593857    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:16:54.827475    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:16:54.827475    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:54.828538    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:16:57.503660    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:16:57.503660    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:57.510167    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:16:57.521713    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.112 22 <nil> <nil>}
	I0318 12:16:57.521713    2644 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 12:16:57.656739    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 12:16:57.656827    2644 buildroot.go:166] provisioning hostname "multinode-642600"
	I0318 12:16:57.656827    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:16:59.878994    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:16:59.878994    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:16:59.879654    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:17:02.508787    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:17:02.508787    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:02.516277    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:17:02.516818    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.112 22 <nil> <nil>}
	I0318 12:17:02.516910    2644 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-642600 && echo "multinode-642600" | sudo tee /etc/hostname
	I0318 12:17:02.684850    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-642600
	
	I0318 12:17:02.685014    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:17:04.949620    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:17:04.950120    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:04.950318    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:17:07.620900    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:17:07.620900    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:07.627018    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:17:07.627729    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.112 22 <nil> <nil>}
	I0318 12:17:07.627729    2644 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-642600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-642600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-642600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:17:07.786392    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: 
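
Note: hostname provisioning above is three SSH steps: set the kernel hostname, persist it to /etc/hostname, and patch /etc/hosts so the name resolves locally (rewriting an existing 127.0.1.1 line if there is one, appending otherwise). A sketch mirroring the logged one-liners; the runner interface is a stand-in for minikube's SSH runner, not its real API:

    package provision

    import "fmt"

    // runner abstracts "run this shell command on the guest over SSH".
    type runner interface {
        Run(cmd string) (string, error)
    }

    func setHostname(r runner, name string) error {
        // Live hostname plus /etc/hostname.
        if _, err := r.Run(fmt.Sprintf(
            "sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)); err != nil {
            return err
        }
        // Make sure /etc/hosts has a local mapping for the name.
        hostsFix := fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
        else
            echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
        fi
    fi`, name)
        _, err := r.Run(hostsFix)
        return err
    }
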
	I0318 12:17:07.786392    2644 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0318 12:17:07.786392    2644 buildroot.go:174] setting up certificates
	I0318 12:17:07.786392    2644 provision.go:84] configureAuth start
	I0318 12:17:07.786392    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:17:10.036397    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:17:10.036397    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:10.037401    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:17:12.691998    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:17:12.691998    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:12.691998    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:17:14.881188    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:17:14.881188    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:14.881304    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:17:17.544148    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:17:17.544148    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:17.544269    2644 provision.go:143] copyHostCerts
	I0318 12:17:17.544383    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0318 12:17:17.544710    2644 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0318 12:17:17.544792    2644 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0318 12:17:17.545263    2644 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 12:17:17.546510    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0318 12:17:17.546804    2644 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0318 12:17:17.546804    2644 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0318 12:17:17.547199    2644 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 12:17:17.548221    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0318 12:17:17.548479    2644 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0318 12:17:17.548479    2644 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0318 12:17:17.548875    2644 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 12:17:17.549839    2644 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-642600 san=[127.0.0.1 172.25.151.112 localhost minikube multinode-642600]
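
Note: "generating server cert" above issues an x509 server certificate signed by the test CA, with SANs covering every way the Docker daemon may be addressed: 127.0.0.1, the VM IP 172.25.151.112, localhost, minikube, and multinode-642600. A hedged Go sketch of that step using only the standard library (not minikube's actual implementation; persistence of the private key is omitted):

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // writeServerCert issues a CA-signed server certificate with the SAN
    // list from the log and writes it PEM-encoded to path.
    func writeServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, path string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-642600"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.25.151.112")},
            DNSNames:     []string{"localhost", "minikube", "multinode-642600"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return err
        }
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()
        return pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
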
	I0318 12:17:17.952837    2644 provision.go:177] copyRemoteCerts
	I0318 12:17:17.964831    2644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:17:17.964831    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:17:20.191752    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:17:20.191833    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:20.191928    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:17:22.821194    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:17:22.821194    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:22.821194    2644 sshutil.go:53] new ssh client: &{IP:172.25.151.112 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:17:22.924665    2644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9597277s)
	I0318 12:17:22.924785    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 12:17:22.924836    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:17:22.972625    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 12:17:22.973187    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0318 12:17:23.029453    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 12:17:23.030103    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 12:17:23.077422    2644 provision.go:87] duration metric: took 15.2909366s to configureAuth
	I0318 12:17:23.077422    2644 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:17:23.078256    2644 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:17:23.078384    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:17:25.276811    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:17:25.276923    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:25.276923    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:17:27.963251    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:17:27.963251    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:27.970840    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:17:27.971276    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.112 22 <nil> <nil>}
	I0318 12:17:27.971354    2644 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 12:17:28.111574    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 12:17:28.111676    2644 buildroot.go:70] root file system type: tmpfs
	I0318 12:17:28.111966    2644 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 12:17:28.112116    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:17:30.338896    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:17:30.338896    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:30.338896    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:17:32.967668    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:17:32.967720    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:32.973099    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:17:32.973397    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.112 22 <nil> <nil>}
	I0318 12:17:32.973397    2644 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 12:17:33.131008    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 12:17:33.131564    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:17:35.341476    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:17:35.341559    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:35.341634    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:17:37.977842    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:17:37.977999    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:37.983032    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:17:37.983766    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.112 22 <nil> <nil>}
	I0318 12:17:37.983766    2644 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 12:17:40.183827    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 12:17:40.183827    2644 machine.go:97] duration metric: took 47.5909631s to provisionDockerMachine
	I0318 12:17:40.183827    2644 client.go:171] duration metric: took 2m1.1208287s to LocalClient.Create
	I0318 12:17:40.183827    2644 start.go:167] duration metric: took 2m1.1217345s to libmachine.API.Create "multinode-642600"
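
Note: the docker.service install a few lines up uses an update-if-changed pattern: the rendered unit is written to docker.service.new, diffed against the installed unit, and only on a difference (or, as here, when the unit does not exist yet: "can't stat") moved into place, followed by daemon-reload, enable, and restart. Wrapped as a Go helper over the same SSH-runner stand-in as above:

    package dockersvc

    // runner abstracts an SSH command executor, as in the sketches above.
    type runner interface {
        Run(cmd string) (string, error)
    }

    // installUnitIfChanged replaces the installed unit only when the newly
    // rendered docker.service.new differs (diff exits non-zero), then
    // reloads systemd, enables docker, and restarts it.
    func installUnitIfChanged(r runner) error {
        _, err := r.Run(
            "sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || " +
                "{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; " +
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && " +
                "sudo systemctl -f restart docker; }")
        return err
    }
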
	I0318 12:17:40.183827    2644 start.go:293] postStartSetup for "multinode-642600" (driver="hyperv")
	I0318 12:17:40.183827    2644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:17:40.196619    2644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:17:40.196619    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:17:42.339990    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:17:42.339990    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:42.340312    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:17:45.008993    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:17:45.008993    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:45.009610    2644 sshutil.go:53] new ssh client: &{IP:172.25.151.112 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:17:45.112157    2644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9155088s)
	I0318 12:17:45.125491    2644 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:17:45.133067    2644 command_runner.go:130] > NAME=Buildroot
	I0318 12:17:45.133067    2644 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 12:17:45.133067    2644 command_runner.go:130] > ID=buildroot
	I0318 12:17:45.133067    2644 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 12:17:45.133067    2644 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 12:17:45.133211    2644 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:17:45.133255    2644 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0318 12:17:45.133310    2644 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0318 12:17:45.134798    2644 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> 91202.pem in /etc/ssl/certs
	I0318 12:17:45.134873    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /etc/ssl/certs/91202.pem
	I0318 12:17:45.150499    2644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:17:45.169379    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /etc/ssl/certs/91202.pem (1708 bytes)
	I0318 12:17:45.216572    2644 start.go:296] duration metric: took 5.0327147s for postStartSetup
	I0318 12:17:45.219840    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:17:47.422391    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:17:47.422438    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:47.422890    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:17:50.069627    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:17:50.069627    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:50.070289    2644 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:17:50.073479    2644 start.go:128] duration metric: took 2m11.016418s to createHost
	I0318 12:17:50.073581    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:17:52.248459    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:17:52.248459    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:52.249186    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:17:54.858991    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:17:54.858991    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:54.865331    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:17:54.865716    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.112 22 <nil> <nil>}
	I0318 12:17:54.865716    2644 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 12:17:55.008865    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710764275.006893972
	
	I0318 12:17:55.008962    2644 fix.go:216] guest clock: 1710764275.006893972
	I0318 12:17:55.008962    2644 fix.go:229] Guest: 2024-03-18 12:17:55.006893972 +0000 UTC Remote: 2024-03-18 12:17:50.0735245 +0000 UTC m=+136.857184301 (delta=4.933369472s)
	I0318 12:17:55.009088    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:17:57.206470    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:17:57.206470    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:57.206859    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:17:59.855664    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:17:59.856653    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:17:59.863161    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:17:59.863270    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.151.112 22 <nil> <nil>}
	I0318 12:17:59.863270    2644 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710764275
	I0318 12:18:00.022184    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 12:17:55 UTC 2024
	
	I0318 12:18:00.022184    2644 fix.go:236] clock set: Mon Mar 18 12:17:55 UTC 2024
	 (err=<nil>)
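
Note: the fix.go lines above measure guest/host clock skew by running "date +%s.%N" over SSH (the "date +%!s(MISSING).%!N(MISSING)" rendering is just Go's printf complaining about the literal % verbs when the command is logged) and then reset the guest clock with "sudo date -s @<epoch>"; here the measured delta was about 4.9 s. A simplified sketch under the same runner stand-in; resetting the guest to the host clock past a threshold is an assumption of this sketch, not a claim about minikube's exact policy:

    package clocksync

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    type runner interface {
        Run(cmd string) (string, error)
    }

    // syncGuestClock compares the guest clock with the local one and, if
    // the drift exceeds maxDrift, sets the guest to the current epoch second.
    func syncGuestClock(r runner, maxDrift time.Duration) error {
        out, err := r.Run("date +%s.%N")
        if err != nil {
            return err
        }
        secs, err := strconv.ParseInt(strings.SplitN(strings.TrimSpace(out), ".", 2)[0], 10, 64)
        if err != nil {
            return err
        }
        drift := time.Since(time.Unix(secs, 0))
        if drift < 0 {
            drift = -drift
        }
        if drift <= maxDrift {
            return nil
        }
        _, err = r.Run(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
        return err
    }
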
	I0318 12:18:00.022184    2644 start.go:83] releasing machines lock for "multinode-642600", held for 2m20.9650624s
	I0318 12:18:00.022184    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:18:02.237866    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:18:02.238661    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:18:02.238723    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:18:04.857578    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:18:04.858583    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:18:04.862623    2644 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:18:04.862776    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:18:04.872690    2644 ssh_runner.go:195] Run: cat /version.json
	I0318 12:18:04.872690    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:18:07.157966    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:18:07.158109    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:18:07.158167    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:18:07.175498    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:18:07.175498    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:18:07.175674    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:18:09.951808    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:18:09.951872    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:18:09.952735    2644 sshutil.go:53] new ssh client: &{IP:172.25.151.112 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:18:09.963599    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:18:09.963599    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:18:09.964406    2644 sshutil.go:53] new ssh client: &{IP:172.25.151.112 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:18:10.204523    2644 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 12:18:10.204523    2644 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0318 12:18:10.204523    2644 ssh_runner.go:235] Completed: cat /version.json: (5.3318003s)
	I0318 12:18:10.204523    2644 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3418674s)
	I0318 12:18:10.217150    2644 ssh_runner.go:195] Run: systemctl --version
	I0318 12:18:10.226629    2644 command_runner.go:130] > systemd 252 (252)
	I0318 12:18:10.226629    2644 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 12:18:10.238684    2644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 12:18:10.247327    2644 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 12:18:10.248175    2644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:18:10.261016    2644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:18:10.292732    2644 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0318 12:18:10.293259    2644 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
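
Note: the CNI cleanup above finds any bridge/podman configs under /etc/cni/net.d and renames them to *.mk_disabled so they stop loading; in this run it disables 87-podman-bridge.conflist. A local-filesystem sketch of the same rename (the logged version runs find/mv remotely over SSH):

    package cni

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeConfigs renames bridge/podman CNI configs in dir to
    // <name>.mk_disabled and returns the paths it disabled.
    func disableBridgeConfigs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }
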
	I0318 12:18:10.293308    2644 start.go:494] detecting cgroup driver to use...
	I0318 12:18:10.293543    2644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:18:10.329366    2644 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0318 12:18:10.342048    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 12:18:10.377162    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 12:18:10.398580    2644 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 12:18:10.413887    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 12:18:10.446204    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 12:18:10.480893    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 12:18:10.514888    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 12:18:10.548442    2644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:18:10.579065    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 12:18:10.609313    2644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:18:10.628859    2644 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 12:18:10.641688    2644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:18:10.672143    2644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:18:10.876896    2644 ssh_runner.go:195] Run: sudo systemctl restart containerd
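
Note: the burst of sed edits above rewrites /etc/containerd/config.toml to match the cluster's settings before containerd is restarted: the pause image pinned to registry.k8s.io/pause:3.9, restrict_oom_score_adj turned off, SystemdCgroup set to false (that is, the "cgroupfs" driver), and legacy runtime names mapped to io.containerd.runc.v2. Two of those edits re-expressed in Go (values taken from the log; this is not minikube's code):

    package containerdcfg

    import (
        "os"
        "regexp"
    )

    // forceCgroupfs applies two of the logged edits to config.toml:
    // pin the sandbox image and disable the systemd cgroup driver.
    func forceCgroupfs(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        data = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
            ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.9"`))
        data = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
            ReplaceAll(data, []byte(`${1}SystemdCgroup = false`))
        return os.WriteFile(path, data, 0644)
    }
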
	I0318 12:18:10.910146    2644 start.go:494] detecting cgroup driver to use...
	I0318 12:18:10.923748    2644 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 12:18:10.948454    2644 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0318 12:18:10.948454    2644 command_runner.go:130] > [Unit]
	I0318 12:18:10.948454    2644 command_runner.go:130] > Description=Docker Application Container Engine
	I0318 12:18:10.948454    2644 command_runner.go:130] > Documentation=https://docs.docker.com
	I0318 12:18:10.948454    2644 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0318 12:18:10.948454    2644 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0318 12:18:10.948454    2644 command_runner.go:130] > StartLimitBurst=3
	I0318 12:18:10.948454    2644 command_runner.go:130] > StartLimitIntervalSec=60
	I0318 12:18:10.948454    2644 command_runner.go:130] > [Service]
	I0318 12:18:10.948454    2644 command_runner.go:130] > Type=notify
	I0318 12:18:10.948454    2644 command_runner.go:130] > Restart=on-failure
	I0318 12:18:10.948454    2644 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0318 12:18:10.948670    2644 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0318 12:18:10.948670    2644 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0318 12:18:10.948670    2644 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0318 12:18:10.948670    2644 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0318 12:18:10.948670    2644 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0318 12:18:10.948670    2644 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0318 12:18:10.948670    2644 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0318 12:18:10.948670    2644 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0318 12:18:10.948670    2644 command_runner.go:130] > ExecStart=
	I0318 12:18:10.948670    2644 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0318 12:18:10.948670    2644 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0318 12:18:10.948861    2644 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0318 12:18:10.948861    2644 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0318 12:18:10.948861    2644 command_runner.go:130] > LimitNOFILE=infinity
	I0318 12:18:10.948861    2644 command_runner.go:130] > LimitNPROC=infinity
	I0318 12:18:10.948861    2644 command_runner.go:130] > LimitCORE=infinity
	I0318 12:18:10.948861    2644 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0318 12:18:10.948861    2644 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0318 12:18:10.948963    2644 command_runner.go:130] > TasksMax=infinity
	I0318 12:18:10.948963    2644 command_runner.go:130] > TimeoutStartSec=0
	I0318 12:18:10.949064    2644 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0318 12:18:10.949064    2644 command_runner.go:130] > Delegate=yes
	I0318 12:18:10.949064    2644 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0318 12:18:10.949064    2644 command_runner.go:130] > KillMode=process
	I0318 12:18:10.949064    2644 command_runner.go:130] > [Install]
	I0318 12:18:10.949064    2644 command_runner.go:130] > WantedBy=multi-user.target
	I0318 12:18:10.961010    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:18:10.998585    2644 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:18:11.040196    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:18:11.076195    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 12:18:11.111465    2644 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 12:18:11.177538    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 12:18:11.200014    2644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:18:11.234201    2644 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0318 12:18:11.247473    2644 ssh_runner.go:195] Run: which cri-dockerd
	I0318 12:18:11.255609    2644 command_runner.go:130] > /usr/bin/cri-dockerd
	I0318 12:18:11.269404    2644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 12:18:11.288255    2644 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 12:18:11.335958    2644 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 12:18:11.547165    2644 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 12:18:11.741627    2644 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 12:18:11.741893    2644 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 12:18:11.793935    2644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:18:12.009073    2644 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 12:18:14.561702    2644 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5526133s)
	I0318 12:18:14.572614    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 12:18:14.610245    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 12:18:14.651280    2644 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 12:18:14.873217    2644 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 12:18:15.089357    2644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:18:15.321007    2644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 12:18:15.362871    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 12:18:15.402444    2644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:18:15.613267    2644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 12:18:15.722690    2644 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 12:18:15.737088    2644 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 12:18:15.746897    2644 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0318 12:18:15.747109    2644 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 12:18:15.747109    2644 command_runner.go:130] > Device: 0,22	Inode: 883         Links: 1
	I0318 12:18:15.747168    2644 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0318 12:18:15.747168    2644 command_runner.go:130] > Access: 2024-03-18 12:18:15.640555876 +0000
	I0318 12:18:15.747168    2644 command_runner.go:130] > Modify: 2024-03-18 12:18:15.640555876 +0000
	I0318 12:18:15.747168    2644 command_runner.go:130] > Change: 2024-03-18 12:18:15.643555878 +0000
	I0318 12:18:15.747168    2644 command_runner.go:130] >  Birth: -
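
Note: "Will wait 60s for socket path" above is satisfied by a stat probe; the output confirms /var/run/cri-dockerd.sock exists and is a socket owned by root:docker. A sketch of the waiting side, with a local os.Stat standing in for the remote stat over SSH:

    package waitsock

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a UNIX socket.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s did not appear within %s", path, timeout)
    }
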
	I0318 12:18:15.747263    2644 start.go:562] Will wait 60s for crictl version
	I0318 12:18:15.759890    2644 ssh_runner.go:195] Run: which crictl
	I0318 12:18:15.765356    2644 command_runner.go:130] > /usr/bin/crictl
	I0318 12:18:15.779139    2644 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:18:15.861042    2644 command_runner.go:130] > Version:  0.1.0
	I0318 12:18:15.861042    2644 command_runner.go:130] > RuntimeName:  docker
	I0318 12:18:15.861042    2644 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0318 12:18:15.861042    2644 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 12:18:15.861042    2644 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 12:18:15.872107    2644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 12:18:15.907842    2644 command_runner.go:130] > 25.0.4
	I0318 12:18:15.921077    2644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 12:18:15.959996    2644 command_runner.go:130] > 25.0.4
	I0318 12:18:15.964461    2644 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 12:18:15.964849    2644 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 12:18:15.973157    2644 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 12:18:15.973157    2644 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 12:18:15.973157    2644 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 12:18:15.973157    2644 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ae:0d:2c Flags:up|broadcast|multicast|running}
	I0318 12:18:15.976436    2644 ip.go:210] interface addr: fe80::f8a6:d6b6:cc4:1ba0/64
	I0318 12:18:15.976436    2644 ip.go:210] interface addr: 172.25.144.1/20
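
Note: getIPForInterface above walks the host's adapters, prefix-matches the name "vEthernet (Default Switch)" (rejecting "Ethernet 2" and the loopback), and takes that interface's IPv4 address, 172.25.144.1, which is then written into the guest's /etc/hosts as host.minikube.internal. A sketch with the standard net package:

    package hostip

    import (
        "fmt"
        "net"
        "strings"
    )

    // ipForInterface returns the first IPv4 address of the first interface
    // whose name starts with prefix.
    func ipForInterface(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, prefix) {
                continue // e.g. "Ethernet 2" is skipped, as in the log
            }
            addrs, err := iface.Addrs()
            if err != nil {
                return nil, err
            }
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    return ipnet.IP, nil // skips the fe80:: link-local entry
                }
            }
        }
        return nil, fmt.Errorf("no interface matching %q with an IPv4 address", prefix)
    }
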
	I0318 12:18:15.987657    2644 ssh_runner.go:195] Run: grep 172.25.144.1	host.minikube.internal$ /etc/hosts
	I0318 12:18:15.994405    2644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:18:16.016513    2644 kubeadm.go:877] updating cluster {Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.151.112 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 12:18:16.016665    2644 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 12:18:16.027233    2644 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 12:18:16.051855    2644 docker.go:685] Got preloaded images: 
	I0318 12:18:16.051921    2644 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0318 12:18:16.069136    2644 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 12:18:16.086497    2644 command_runner.go:139] > {"Repositories":{}}
	I0318 12:18:16.100419    2644 ssh_runner.go:195] Run: which lz4
	I0318 12:18:16.105773    2644 command_runner.go:130] > /usr/bin/lz4
	I0318 12:18:16.105901    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0318 12:18:16.118299    2644 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 12:18:16.123666    2644 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 12:18:16.124167    2644 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 12:18:16.124167    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0318 12:18:18.522056    2644 docker.go:649] duration metric: took 2.4157093s to copy over tarball
	I0318 12:18:18.536622    2644 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 12:18:28.858613    2644 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.3218338s)
	I0318 12:18:28.858694    2644 ssh_runner.go:146] rm: /preloaded.tar.lz4
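	The sequence above is the preload fast path: the failed stat shows the tarball is absent in the guest, so it is copied over ssh and unpacked into /var before docker is restarted. A hedged sketch of the same flow (SRC is an assumed local path; in this run the copy is an scp from the Windows-side cache shown in the log):

	    # assumed guest-local equivalent of the host cache path in the log
	    SRC="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4"
	    if ! stat -c "%s %y" /preloaded.tar.lz4 >/dev/null 2>&1; then
	      sudo cp "$SRC" /preloaded.tar.lz4
	    fi
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4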
	I0318 12:18:28.936441    2644 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 12:18:28.958396    2644 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0318 12:18:28.958396    2644 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0318 12:18:29.008405    2644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:18:29.232848    2644 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 12:18:32.207906    2644 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9750397s)
	I0318 12:18:32.219782    2644 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 12:18:32.251605    2644 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 12:18:32.251651    2644 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 12:18:32.251682    2644 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 12:18:32.251682    2644 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 12:18:32.251682    2644 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 12:18:32.251682    2644 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 12:18:32.251682    2644 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 12:18:32.251682    2644 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 12:18:32.251682    2644 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 12:18:32.251682    2644 cache_images.go:84] Images are preloaded, skipping loading
	I0318 12:18:32.251682    2644 kubeadm.go:928] updating node { 172.25.151.112 8443 v1.28.4 docker true true} ...
	I0318 12:18:32.252212    2644 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-642600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.151.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:18:32.262452    2644 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 12:18:32.299515    2644 command_runner.go:130] > cgroupfs
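	Docker and the kubelet must agree on the cgroup driver; the probe above reports cgroupfs, which is then written into the KubeletConfiguration further down. A quick manual check of the same invariant (the kubelet config path only exists once kubeadm has run):

	    docker info --format '{{.CgroupDriver}}'          # cgroupfs in this run
	    grep cgroupDriver /var/lib/kubelet/config.yaml    # should print: cgroupDriver: cgroupfs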
	I0318 12:18:32.300699    2644 cni.go:84] Creating CNI manager for ""
	I0318 12:18:32.300756    2644 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 12:18:32.300756    2644 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 12:18:32.300850    2644 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.151.112 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-642600 NodeName:multinode-642600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.151.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.151.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 12:18:32.301027    2644 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.151.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-642600"
	  kubeletExtraArgs:
	    node-ip: 172.25.151.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.151.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 12:18:32.315083    2644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:18:32.334632    2644 command_runner.go:130] > kubeadm
	I0318 12:18:32.334731    2644 command_runner.go:130] > kubectl
	I0318 12:18:32.334731    2644 command_runner.go:130] > kubelet
	I0318 12:18:32.334731    2644 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 12:18:32.347188    2644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 12:18:32.364421    2644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0318 12:18:32.401060    2644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:18:32.432723    2644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
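	The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml later in this log. kubeadm v1.26+ ships a config validator, so a file like this can be spot-checked by hand (not something this run performs):

	    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new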
	I0318 12:18:32.475708    2644 ssh_runner.go:195] Run: grep 172.25.151.112	control-plane.minikube.internal$ /etc/hosts
	I0318 12:18:32.484158    2644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.151.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:18:32.519557    2644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:18:32.746997    2644 ssh_runner.go:195] Run: sudo systemctl start kubelet
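	Condensed, the kubelet bring-up above is: create the unit directories, install the unit and its kubeadm drop-in, reload systemd, start the service. A sketch with placeholder local file names (the log streams the file contents over ssh instead of copying local files):

	    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	    sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/   # the 317-byte drop-in above
	    sudo cp kubelet.service /lib/systemd/system/                     # the 352-byte unit above
	    sudo systemctl daemon-reload
	    sudo systemctl start kubelet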
	I0318 12:18:32.777984    2644 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600 for IP: 172.25.151.112
	I0318 12:18:32.777984    2644 certs.go:194] generating shared ca certs ...
	I0318 12:18:32.778104    2644 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:18:32.778762    2644 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0318 12:18:32.779113    2644 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0318 12:18:32.779200    2644 certs.go:256] generating profile certs ...
	I0318 12:18:32.779533    2644 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\client.key
	I0318 12:18:32.779533    2644 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\client.crt with IP's: []
	I0318 12:18:33.275235    2644 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\client.crt ...
	I0318 12:18:33.275235    2644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\client.crt: {Name:mk894c9e1da40e72a513efd407953fa4682931df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:18:33.277320    2644 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\client.key ...
	I0318 12:18:33.277320    2644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\client.key: {Name:mkaf4fb73bddd05e46997dc4d366d17f1db734aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:18:33.278439    2644 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key.7f164ed8
	I0318 12:18:33.278439    2644 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt.7f164ed8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.151.112]
	I0318 12:18:33.647418    2644 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt.7f164ed8 ...
	I0318 12:18:33.647418    2644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt.7f164ed8: {Name:mkfc2e143acb6fd9ae162a48977a2dd512e2deaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:18:33.648742    2644 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key.7f164ed8 ...
	I0318 12:18:33.648742    2644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key.7f164ed8: {Name:mk8dc19c19f45f0376d5f4a17dd03d59d813dde9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:18:33.649136    2644 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt.7f164ed8 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt
	I0318 12:18:33.668113    2644 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key.7f164ed8 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key
	I0318 12:18:33.669108    2644 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.key
	I0318 12:18:33.669108    2644 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.crt with IP's: []
	I0318 12:18:34.354693    2644 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.crt ...
	I0318 12:18:34.354693    2644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.crt: {Name:mk5cb53b8072b5fda063862ec150341f20f6c7a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:18:34.355938    2644 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.key ...
	I0318 12:18:34.356851    2644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.key: {Name:mkba4f3bb16755e8367209c0ab38067a5c38b8a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
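	The apiserver certificate generated above is signed by minikubeCA and carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 172.25.151.112]. An equivalent openssl sketch (file names are placeholders; minikube does this in-process via its crypto package rather than shelling out):

	    openssl genrsa -out apiserver.key 2048
	    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
	      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:172.25.151.112') \
	      -out apiserver.crt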
	I0318 12:18:34.357875    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:18:34.358566    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:18:34.358566    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:18:34.359015    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:18:34.359211    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 12:18:34.359531    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 12:18:34.359739    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 12:18:34.368978    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 12:18:34.369399    2644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem (1338 bytes)
	W0318 12:18:34.370140    2644 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120_empty.pem, impossibly tiny 0 bytes
	I0318 12:18:34.370140    2644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0318 12:18:34.370487    2644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 12:18:34.370768    2644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 12:18:34.371068    2644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 12:18:34.371384    2644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem (1708 bytes)
	I0318 12:18:34.372174    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:18:34.372346    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem -> /usr/share/ca-certificates/9120.pem
	I0318 12:18:34.372546    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /usr/share/ca-certificates/91202.pem
	I0318 12:18:34.372715    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:18:34.424966    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:18:34.474802    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:18:34.523541    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:18:34.570238    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 12:18:34.619304    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 12:18:34.669449    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:18:34.712969    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 12:18:34.758497    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:18:34.816540    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem --> /usr/share/ca-certificates/9120.pem (1338 bytes)
	I0318 12:18:34.869268    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /usr/share/ca-certificates/91202.pem (1708 bytes)
	I0318 12:18:34.919624    2644 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 12:18:34.968888    2644 ssh_runner.go:195] Run: openssl version
	I0318 12:18:34.979856    2644 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 12:18:34.992973    2644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:18:35.024059    2644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:18:35.031468    2644 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:18:35.031532    2644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:18:35.044841    2644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:18:35.053156    2644 command_runner.go:130] > b5213941
	I0318 12:18:35.066946    2644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:18:35.102592    2644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9120.pem && ln -fs /usr/share/ca-certificates/9120.pem /etc/ssl/certs/9120.pem"
	I0318 12:18:35.137792    2644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9120.pem
	I0318 12:18:35.148480    2644 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 12:18:35.148514    2644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 12:18:35.161046    2644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9120.pem
	I0318 12:18:35.169410    2644 command_runner.go:130] > 51391683
	I0318 12:18:35.181886    2644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9120.pem /etc/ssl/certs/51391683.0"
	I0318 12:18:35.214174    2644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91202.pem && ln -fs /usr/share/ca-certificates/91202.pem /etc/ssl/certs/91202.pem"
	I0318 12:18:35.249571    2644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91202.pem
	I0318 12:18:35.255478    2644 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 12:18:35.255478    2644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 12:18:35.266864    2644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91202.pem
	I0318 12:18:35.275200    2644 command_runner.go:130] > 3ec20f2e
	I0318 12:18:35.288684    2644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91202.pem /etc/ssl/certs/3ec20f2e.0"
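	Each hash printed above is the certificate's OpenSSL subject hash, which becomes the symlink name under /etc/ssl/certs. The same pairing done by hand, using the minikubeCA example from this log:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 above
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"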
	I0318 12:18:35.319473    2644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:18:35.325493    2644 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:18:35.326645    2644 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:18:35.327128    2644 kubeadm.go:391] StartCluster: {Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.151.112 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:18:35.337010    2644 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 12:18:35.376867    2644 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 12:18:35.399129    2644 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0318 12:18:35.399129    2644 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0318 12:18:35.400127    2644 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0318 12:18:35.412121    2644 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 12:18:35.443112    2644 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 12:18:35.459287    2644 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0318 12:18:35.459346    2644 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0318 12:18:35.459534    2644 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0318 12:18:35.460180    2644 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 12:18:35.460938    2644 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 12:18:35.460938    2644 kubeadm.go:156] found existing configuration files:
	
	I0318 12:18:35.474127    2644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 12:18:35.494100    2644 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 12:18:35.494100    2644 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 12:18:35.506086    2644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 12:18:35.539927    2644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 12:18:35.557934    2644 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 12:18:35.558675    2644 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 12:18:35.570919    2644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 12:18:35.600887    2644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 12:18:35.619570    2644 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 12:18:35.619570    2644 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 12:18:35.631488    2644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 12:18:35.665283    2644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 12:18:35.686819    2644 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 12:18:35.686908    2644 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 12:18:35.699013    2644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
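	The grep-then-rm passes above sweep out any kubeconfig that does not reference the expected control-plane endpoint (here every file is simply missing, so each rm is a no-op cleanup). The same loop written out:

	    EP="https://control-plane.minikube.internal:8443"
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	    done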
	I0318 12:18:35.717591    2644 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 12:18:36.188797    2644 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 12:18:36.188797    2644 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 12:18:50.952071    2644 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 12:18:50.953090    2644 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0318 12:18:50.953090    2644 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 12:18:50.953090    2644 command_runner.go:130] > [preflight] Running pre-flight checks
	I0318 12:18:50.953090    2644 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 12:18:50.953090    2644 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 12:18:50.953090    2644 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 12:18:50.953090    2644 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 12:18:50.953090    2644 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0318 12:18:50.953090    2644 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0318 12:18:50.953090    2644 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 12:18:50.953090    2644 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 12:18:50.956067    2644 out.go:204]   - Generating certificates and keys ...
	I0318 12:18:50.957084    2644 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 12:18:50.957084    2644 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0318 12:18:50.957084    2644 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 12:18:50.957084    2644 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0318 12:18:50.957084    2644 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 12:18:50.957084    2644 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 12:18:50.957084    2644 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0318 12:18:50.957084    2644 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 12:18:50.958097    2644 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 12:18:50.958097    2644 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0318 12:18:50.958097    2644 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 12:18:50.958097    2644 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0318 12:18:50.958097    2644 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 12:18:50.958097    2644 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0318 12:18:50.958097    2644 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-642600] and IPs [172.25.151.112 127.0.0.1 ::1]
	I0318 12:18:50.958097    2644 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-642600] and IPs [172.25.151.112 127.0.0.1 ::1]
	I0318 12:18:50.958097    2644 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0318 12:18:50.958097    2644 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 12:18:50.959075    2644 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-642600] and IPs [172.25.151.112 127.0.0.1 ::1]
	I0318 12:18:50.959075    2644 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-642600] and IPs [172.25.151.112 127.0.0.1 ::1]
	I0318 12:18:50.959075    2644 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 12:18:50.959075    2644 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 12:18:50.959075    2644 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 12:18:50.959075    2644 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 12:18:50.959075    2644 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 12:18:50.959075    2644 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0318 12:18:50.959075    2644 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 12:18:50.959075    2644 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 12:18:50.959075    2644 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 12:18:50.959075    2644 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 12:18:50.959075    2644 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 12:18:50.959075    2644 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 12:18:50.960090    2644 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 12:18:50.960090    2644 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 12:18:50.960090    2644 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 12:18:50.960090    2644 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 12:18:50.960090    2644 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 12:18:50.960090    2644 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 12:18:50.960090    2644 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 12:18:50.960090    2644 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 12:18:50.980062    2644 out.go:204]   - Booting up control plane ...
	I0318 12:18:50.980590    2644 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 12:18:50.980590    2644 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 12:18:50.981097    2644 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 12:18:50.981097    2644 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 12:18:50.981292    2644 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 12:18:50.981292    2644 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 12:18:50.981292    2644 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 12:18:50.981292    2644 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 12:18:50.981292    2644 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 12:18:50.981292    2644 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 12:18:50.982132    2644 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 12:18:50.982132    2644 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0318 12:18:50.982376    2644 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 12:18:50.982376    2644 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 12:18:50.982773    2644 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.006068 seconds
	I0318 12:18:50.982773    2644 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.006068 seconds
	I0318 12:18:50.982962    2644 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 12:18:50.983056    2644 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 12:18:50.983285    2644 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 12:18:50.983347    2644 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 12:18:50.983521    2644 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0318 12:18:50.983521    2644 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 12:18:50.983629    2644 command_runner.go:130] > [mark-control-plane] Marking the node multinode-642600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 12:18:50.983629    2644 kubeadm.go:309] [mark-control-plane] Marking the node multinode-642600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 12:18:50.983629    2644 command_runner.go:130] > [bootstrap-token] Using token: mqfzfu.5w8s7oaqdif2h49e
	I0318 12:18:50.983629    2644 kubeadm.go:309] [bootstrap-token] Using token: mqfzfu.5w8s7oaqdif2h49e
	I0318 12:18:50.987860    2644 out.go:204]   - Configuring RBAC rules ...
	I0318 12:18:50.988499    2644 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 12:18:50.988499    2644 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 12:18:50.988499    2644 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 12:18:50.988499    2644 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 12:18:50.988499    2644 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 12:18:50.988499    2644 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 12:18:50.989159    2644 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 12:18:50.989159    2644 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 12:18:50.990061    2644 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 12:18:50.990061    2644 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 12:18:50.990061    2644 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 12:18:50.990061    2644 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 12:18:50.991083    2644 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 12:18:50.991083    2644 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 12:18:50.991083    2644 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 12:18:50.991083    2644 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0318 12:18:50.991083    2644 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0318 12:18:50.991083    2644 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 12:18:50.991083    2644 kubeadm.go:309] 
	I0318 12:18:50.991083    2644 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0318 12:18:50.991083    2644 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 12:18:50.991083    2644 kubeadm.go:309] 
	I0318 12:18:50.991083    2644 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0318 12:18:50.991083    2644 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 12:18:50.991083    2644 kubeadm.go:309] 
	I0318 12:18:50.991083    2644 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0318 12:18:50.992091    2644 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 12:18:50.992091    2644 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 12:18:50.992091    2644 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 12:18:50.992091    2644 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 12:18:50.992091    2644 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 12:18:50.992091    2644 kubeadm.go:309] 
	I0318 12:18:50.992091    2644 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 12:18:50.992091    2644 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0318 12:18:50.992091    2644 kubeadm.go:309] 
	I0318 12:18:50.992091    2644 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 12:18:50.992091    2644 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 12:18:50.992091    2644 kubeadm.go:309] 
	I0318 12:18:50.992091    2644 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 12:18:50.992091    2644 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0318 12:18:50.992091    2644 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 12:18:50.992091    2644 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 12:18:50.993063    2644 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 12:18:50.993063    2644 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 12:18:50.993063    2644 kubeadm.go:309] 
	I0318 12:18:50.993063    2644 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 12:18:50.993063    2644 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0318 12:18:50.993063    2644 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0318 12:18:50.993063    2644 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 12:18:50.993063    2644 kubeadm.go:309] 
	I0318 12:18:50.993063    2644 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token mqfzfu.5w8s7oaqdif2h49e \
	I0318 12:18:50.993063    2644 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token mqfzfu.5w8s7oaqdif2h49e \
	I0318 12:18:50.994093    2644 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef \
	I0318 12:18:50.994093    2644 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef \
	I0318 12:18:50.994093    2644 kubeadm.go:309] 	--control-plane 
	I0318 12:18:50.994093    2644 command_runner.go:130] > 	--control-plane 
	I0318 12:18:50.994093    2644 kubeadm.go:309] 
	I0318 12:18:50.994093    2644 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0318 12:18:50.994093    2644 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 12:18:50.994093    2644 kubeadm.go:309] 
	I0318 12:18:50.994093    2644 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token mqfzfu.5w8s7oaqdif2h49e \
	I0318 12:18:50.994093    2644 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token mqfzfu.5w8s7oaqdif2h49e \
	I0318 12:18:50.994093    2644 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef 
	I0318 12:18:50.994093    2644 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef 
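	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be reproduced with the standard kubeadm recipe, pointed at the CA path this run uses:

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'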
	I0318 12:18:50.994093    2644 cni.go:84] Creating CNI manager for ""
	I0318 12:18:50.994093    2644 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 12:18:50.999070    2644 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 12:18:51.013071    2644 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 12:18:51.022013    2644 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0318 12:18:51.022013    2644 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0318 12:18:51.022013    2644 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0318 12:18:51.022013    2644 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 12:18:51.022013    2644 command_runner.go:130] > Access: 2024-03-18 12:16:47.255187900 +0000
	I0318 12:18:51.022013    2644 command_runner.go:130] > Modify: 2024-03-15 22:00:10.000000000 +0000
	I0318 12:18:51.022013    2644 command_runner.go:130] > Change: 2024-03-18 12:16:37.961000000 +0000
	I0318 12:18:51.022294    2644 command_runner.go:130] >  Birth: -
	I0318 12:18:51.024715    2644 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 12:18:51.024715    2644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 12:18:51.112588    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 12:18:52.772457    2644 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0318 12:18:52.772457    2644 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0318 12:18:52.772457    2644 command_runner.go:130] > serviceaccount/kindnet created
	I0318 12:18:52.772457    2644 command_runner.go:130] > daemonset.apps/kindnet created
	I0318 12:18:52.773528    2644 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.6609303s)
	I0318 12:18:52.773528    2644 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 12:18:52.787453    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:52.787453    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-642600 minikube.k8s.io/updated_at=2024_03_18T12_18_52_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd minikube.k8s.io/name=multinode-642600 minikube.k8s.io/primary=true
	I0318 12:18:52.795064    2644 command_runner.go:130] > -16
	I0318 12:18:52.795861    2644 ops.go:34] apiserver oom_adj: -16
	I0318 12:18:52.988379    2644 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0318 12:18:52.988567    2644 command_runner.go:130] > node/multinode-642600 labeled
	I0318 12:18:53.000059    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:53.129730    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:18:53.501531    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:53.619731    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:18:54.007532    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:54.124702    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:18:54.509972    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:54.633907    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:18:55.019299    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:55.159112    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:18:55.501467    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:55.633093    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:18:56.007619    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:56.149452    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:18:56.510093    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:56.646617    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:18:57.014836    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:57.150049    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:18:57.502587    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:57.640087    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:18:58.006813    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:58.157320    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:18:58.514967    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:58.649976    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:18:59.015648    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:59.142689    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:18:59.502532    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:18:59.629973    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:19:00.009553    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:19:00.151734    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:19:00.504755    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:19:00.625751    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:19:01.007274    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:19:01.144978    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:19:01.508747    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:19:01.691393    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:19:02.015861    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:19:02.158631    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:19:02.506788    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:19:02.688735    2644 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:19:03.012957    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:19:03.269765    2644 command_runner.go:130] > NAME      SECRETS   AGE
	I0318 12:19:03.269950    2644 command_runner.go:130] > default   0         1s
	I0318 12:19:03.270038    2644 kubeadm.go:1107] duration metric: took 10.4964456s to wait for elevateKubeSystemPrivileges
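
	The run of NotFound lines above is one retry loop: minikube re-runs `kubectl get sa default` roughly twice a second until kube-controller-manager's serviceaccount controller has created the "default" ServiceAccount, which took about 10.5s here. The same wait expressed with client-go, as a sketch assuming a built clientset:

	    package sketch

	    import (
	        "context"
	        "time"

	        apierrors "k8s.io/apimachinery/pkg/api/errors"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitForDefaultSA blocks until the "default" ServiceAccount exists,
	    // the same readiness signal the log polls for above.
	    func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
	        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	                if apierrors.IsNotFound(err) {
	                    return false, nil // not created yet; keep polling
	                }
	                return err == nil, err
	            })
	    }
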
	W0318 12:19:03.270107    2644 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 12:19:03.270107    2644 kubeadm.go:393] duration metric: took 27.9428248s to StartCluster
	I0318 12:19:03.270107    2644 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:19:03.270107    2644 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:19:03.272301    2644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:19:03.274088    2644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 12:19:03.274148    2644 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.151.112 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 12:19:03.274148    2644 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 12:19:03.274332    2644 addons.go:69] Setting storage-provisioner=true in profile "multinode-642600"
	I0318 12:19:03.276743    2644 out.go:177] * Verifying Kubernetes components...
	I0318 12:19:03.274595    2644 addons.go:234] Setting addon storage-provisioner=true in "multinode-642600"
	I0318 12:19:03.274595    2644 addons.go:69] Setting default-storageclass=true in profile "multinode-642600"
	I0318 12:19:03.274987    2644 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:19:03.281497    2644 host.go:66] Checking if "multinode-642600" exists ...
	I0318 12:19:03.281497    2644 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-642600"
	I0318 12:19:03.281797    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:19:03.282820    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
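
	On the hyperv driver, every VM query is a fresh powershell.exe invocation like the two above. A runnable sketch of the pattern; the VM name is the one from the log, and output handling is reduced to a TrimSpace:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Query the VM state the way the log shows:
	        // powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM <name> ).state
	        cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
	            "-NoProfile", "-NonInteractive", `( Hyper-V\Get-VM multinode-642600 ).state`)
	        out, err := cmd.Output()
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println("state:", strings.TrimSpace(string(out))) // e.g. "Running"
	    }
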
	I0318 12:19:03.295921    2644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:19:03.661961    2644 command_runner.go:130] > apiVersion: v1
	I0318 12:19:03.662035    2644 command_runner.go:130] > data:
	I0318 12:19:03.662035    2644 command_runner.go:130] >   Corefile: |
	I0318 12:19:03.662035    2644 command_runner.go:130] >     .:53 {
	I0318 12:19:03.662035    2644 command_runner.go:130] >         errors
	I0318 12:19:03.662035    2644 command_runner.go:130] >         health {
	I0318 12:19:03.662035    2644 command_runner.go:130] >            lameduck 5s
	I0318 12:19:03.662035    2644 command_runner.go:130] >         }
	I0318 12:19:03.662155    2644 command_runner.go:130] >         ready
	I0318 12:19:03.662376    2644 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0318 12:19:03.662567    2644 command_runner.go:130] >            pods insecure
	I0318 12:19:03.662762    2644 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0318 12:19:03.662911    2644 command_runner.go:130] >            ttl 30
	I0318 12:19:03.662911    2644 command_runner.go:130] >         }
	I0318 12:19:03.662911    2644 command_runner.go:130] >         prometheus :9153
	I0318 12:19:03.662911    2644 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0318 12:19:03.663011    2644 command_runner.go:130] >            max_concurrent 1000
	I0318 12:19:03.663011    2644 command_runner.go:130] >         }
	I0318 12:19:03.663011    2644 command_runner.go:130] >         cache 30
	I0318 12:19:03.663011    2644 command_runner.go:130] >         loop
	I0318 12:19:03.663146    2644 command_runner.go:130] >         reload
	I0318 12:19:03.663146    2644 command_runner.go:130] >         loadbalance
	I0318 12:19:03.663146    2644 command_runner.go:130] >     }
	I0318 12:19:03.663194    2644 command_runner.go:130] > kind: ConfigMap
	I0318 12:19:03.663194    2644 command_runner.go:130] > metadata:
	I0318 12:19:03.663241    2644 command_runner.go:130] >   creationTimestamp: "2024-03-18T12:18:50Z"
	I0318 12:19:03.663322    2644 command_runner.go:130] >   name: coredns
	I0318 12:19:03.663367    2644 command_runner.go:130] >   namespace: kube-system
	I0318 12:19:03.663419    2644 command_runner.go:130] >   resourceVersion: "266"
	I0318 12:19:03.663419    2644 command_runner.go:130] >   uid: 1d948610-4040-4ec0-8948-bcd8ff75ce09
	I0318 12:19:03.663921    2644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 12:19:03.797764    2644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:19:04.543989    2644 command_runner.go:130] > configmap/coredns replaced
	I0318 12:19:04.543989    2644 start.go:948] {"host.minikube.internal": 172.25.144.1} host record injected into CoreDNS's ConfigMap
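
	The long sed pipeline above fetches the coredns ConfigMap, splices a hosts{} block (resolving host.minikube.internal to the host gateway, 172.25.144.1 here) in front of the forward plugin, adds log after errors, and replaces the object. A sketch of the same edit done through the API instead of sed, assuming a built clientset; only the hosts insertion is shown, and the indentation must match the live Corefile for the Replace to land:

	    package sketch

	    import (
	        "context"
	        "strings"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // injectHostRecord inserts a hosts{} block for host.minikube.internal
	    // ahead of the forward plugin in the coredns Corefile.
	    func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	        if err != nil {
	            return err
	        }
	        hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
	        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
	            "        forward .", hosts+"        forward .", 1)
	        _, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	        return err
	    }
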
	I0318 12:19:04.545263    2644 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:19:04.545349    2644 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:19:04.545349    2644 kapi.go:59] client config for multinode-642600: &rest.Config{Host:"https://172.25.151.112:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-642600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-642600\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 12:19:04.546187    2644 kapi.go:59] client config for multinode-642600: &rest.Config{Host:"https://172.25.151.112:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-642600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-642600\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
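
	The two rest.Config dumps above describe a certificate-authenticated client built from the integration kubeconfig (note the CertFile/KeyFile/CAFile fields). Rebuilding an equivalent clientset is a few lines of client-go; a sketch, with the kubeconfig path supplied by the caller:

	    package sketch

	    import (
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // newClient loads the cluster address and client cert/key pair from a
	    // kubeconfig file, like the rest.Config logged above.
	    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	        if err != nil {
	            return nil, err
	        }
	        return kubernetes.NewForConfig(cfg)
	    }
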
	I0318 12:19:04.547305    2644 node_ready.go:35] waiting up to 6m0s for node "multinode-642600" to be "Ready" ...
	I0318 12:19:04.547305    2644 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 12:19:04.547305    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:04.547305    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:04.547305    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:04.547305    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:04.549194    2644 round_trippers.go:463] GET https://172.25.151.112:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0318 12:19:04.549194    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:04.549194    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:04.549194    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:04.572163    2644 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0318 12:19:04.572163    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:04.572163    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:04.572163    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:04.572163    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:04.572163    2644 round_trippers.go:580]     Content-Length: 291
	I0318 12:19:04.572163    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:04 GMT
	I0318 12:19:04.572163    2644 round_trippers.go:580]     Audit-Id: 5559c792-930c-486e-8d37-86e397dc9d33
	I0318 12:19:04.572163    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:04.574162    2644 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2483ecbb-9d42-44fc-b1b6-f2af1d8d0842","resourceVersion":"390","creationTimestamp":"2024-03-18T12:18:50Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0318 12:19:04.575197    2644 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2483ecbb-9d42-44fc-b1b6-f2af1d8d0842","resourceVersion":"390","creationTimestamp":"2024-03-18T12:18:50Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0318 12:19:04.575197    2644 round_trippers.go:463] PUT https://172.25.151.112:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0318 12:19:04.575197    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:04.575197    2644 round_trippers.go:473]     Content-Type: application/json
	I0318 12:19:04.575197    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:04.575197    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:04.577169    2644 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0318 12:19:04.577169    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:04.577169    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:04.577169    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:04.577169    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:04.577169    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:04.577169    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:04 GMT
	I0318 12:19:04.577169    2644 round_trippers.go:580]     Audit-Id: c43504c8-931d-4102-b82c-e9c34fff0984
	I0318 12:19:04.577169    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:04.602163    2644 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0318 12:19:04.602163    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:04.602163    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:04 GMT
	I0318 12:19:04.602523    2644 round_trippers.go:580]     Audit-Id: c8045038-67a0-4996-a963-0519e640a275
	I0318 12:19:04.602523    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:04.602523    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:04.602523    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:04.602523    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:04.602523    2644 round_trippers.go:580]     Content-Length: 291
	I0318 12:19:04.602523    2644 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2483ecbb-9d42-44fc-b1b6-f2af1d8d0842","resourceVersion":"393","creationTimestamp":"2024-03-18T12:18:50Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0318 12:19:05.053071    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:05.053314    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:05.053314    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:05.053071    2644 round_trippers.go:463] GET https://172.25.151.112:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0318 12:19:05.053314    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:05.053314    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:05.053426    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:05.053426    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:05.064211    2644 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 12:19:05.064211    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:05.064642    2644 round_trippers.go:580]     Audit-Id: 650bcc47-77fb-4dea-85b8-ac609f6fa26b
	I0318 12:19:05.064642    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:05.064642    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:05.064642    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:05.064642    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:05.064642    2644 round_trippers.go:580]     Content-Length: 291
	I0318 12:19:05.064642    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:05 GMT
	I0318 12:19:05.064761    2644 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2483ecbb-9d42-44fc-b1b6-f2af1d8d0842","resourceVersion":"406","creationTimestamp":"2024-03-18T12:18:50Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0318 12:19:05.064902    2644 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 12:19:05.064902    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:05.064993    2644 round_trippers.go:580]     Audit-Id: 8c3e79c4-dc3d-48f8-b250-a15a76f89a91
	I0318 12:19:05.064993    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:05.064993    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:05.064993    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:05.064993    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:05.064993    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:05 GMT
	I0318 12:19:05.064993    2644 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-642600" context rescaled to 1 replicas
	I0318 12:19:05.065239    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:05.561678    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:05.561678    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:05.561678    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:05.561678    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:05.566298    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:05.566298    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:05.566298    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:05 GMT
	I0318 12:19:05.566298    2644 round_trippers.go:580]     Audit-Id: bd730fc9-4727-4842-b5fb-0fe6195faa3c
	I0318 12:19:05.566298    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:05.566298    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:05.566695    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:05.566774    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:05.566851    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:05.754078    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:19:05.754148    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:05.759134    2644 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 12:19:05.754478    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:19:05.759134    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:05.760309    2644 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:19:05.761929    2644 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:19:05.761929    2644 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 12:19:05.762469    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:19:05.763185    2644 kapi.go:59] client config for multinode-642600: &rest.Config{Host:"https://172.25.151.112:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-642600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-642600\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 12:19:05.763624    2644 addons.go:234] Setting addon default-storageclass=true in "multinode-642600"
	I0318 12:19:05.763624    2644 host.go:66] Checking if "multinode-642600" exists ...
	I0318 12:19:05.764429    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:19:06.053883    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:06.053883    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:06.053883    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:06.053883    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:06.058364    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:06.058364    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:06.058364    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:06.058364    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:06.059255    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:06.059415    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:06.059415    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:06 GMT
	I0318 12:19:06.059415    2644 round_trippers.go:580]     Audit-Id: 4b8ac330-db8c-45e7-a14b-9a39bbce5e3c
	I0318 12:19:06.059653    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:06.562962    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:06.563029    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:06.563106    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:06.563106    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:06.588319    2644 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0318 12:19:06.588319    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:06.588319    2644 round_trippers.go:580]     Audit-Id: beadeff3-3968-4eda-ab95-260464703094
	I0318 12:19:06.588319    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:06.588319    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:06.588319    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:06.589209    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:06.589209    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:06 GMT
	I0318 12:19:06.589898    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:06.590355    2644 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
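
	node_ready.go keeps issuing the GETs above (roughly every 500ms, for up to the 6m0s budget) until the Node's NodeReady condition reports True. The check itself is small; a sketch assuming a built clientset:

	    package sketch

	    import (
	        "context"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // isNodeReady mirrors the `"Ready":"False"` checks in the log: fetch
	    // the Node and inspect its NodeReady condition.
	    func isNodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, c := range node.Status.Conditions {
	            if c.Type == corev1.NodeReady {
	                return c.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }
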
	I0318 12:19:07.051082    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:07.051168    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:07.051168    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:07.051168    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:07.055560    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:07.055560    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:07.056126    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:07.056126    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:07.056126    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:07.056126    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:07 GMT
	I0318 12:19:07.056126    2644 round_trippers.go:580]     Audit-Id: 2f02101d-9e5a-45c6-bff4-cf49c455795d
	I0318 12:19:07.056126    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:07.056797    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:07.558184    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:07.558184    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:07.558184    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:07.558184    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:07.563200    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:19:07.563394    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:07.563394    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:07.563394    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:07.563394    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:07.563394    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:07 GMT
	I0318 12:19:07.563394    2644 round_trippers.go:580]     Audit-Id: ae0c4816-bc9b-4f30-b3ff-5ef1a9c0307c
	I0318 12:19:07.563394    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:07.563820    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:08.048011    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:08.048184    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:08.048184    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:08.048184    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:08.051812    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:19:08.051812    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:08.051812    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:08 GMT
	I0318 12:19:08.051812    2644 round_trippers.go:580]     Audit-Id: 99dabab3-b78b-47e3-8bc3-dfa315a3ad3d
	I0318 12:19:08.051812    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:08.051812    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:08.051812    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:08.051812    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:08.051812    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:08.167821    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:19:08.167821    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:08.167821    2644 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 12:19:08.167821    2644 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 12:19:08.167821    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:19:08.241138    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:19:08.241214    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:08.241285    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:19:08.553952    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:08.553952    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:08.553952    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:08.553952    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:08.558760    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:08.559711    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:08.559711    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:08.559763    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:08.559763    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:08.559763    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:08.559763    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:08 GMT
	I0318 12:19:08.559763    2644 round_trippers.go:580]     Audit-Id: 50620c3c-8b00-449c-9450-f0e70c99434b
	I0318 12:19:08.560122    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:09.061142    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:09.061201    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:09.061260    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:09.061260    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:09.064865    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:19:09.065623    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:09.065623    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:09.065623    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:09.065623    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:09.065623    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:09 GMT
	I0318 12:19:09.065623    2644 round_trippers.go:580]     Audit-Id: 3b0bfa3b-5216-4985-9cfb-89a6f8772dcc
	I0318 12:19:09.065623    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:09.066030    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:09.066490    2644 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:19:09.553312    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:09.553312    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:09.553549    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:09.553549    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:09.557950    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:09.558031    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:09.558123    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:09.558123    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:09 GMT
	I0318 12:19:09.558123    2644 round_trippers.go:580]     Audit-Id: 5f9dad59-9478-4444-b9c1-c0584b7f8b8b
	I0318 12:19:09.558123    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:09.558123    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:09.558123    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:09.558679    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:10.061946    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:10.062065    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:10.062258    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:10.062317    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:10.068829    2644 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:19:10.069372    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:10.069372    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:10.069372    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:10.069372    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:10.069372    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:10 GMT
	I0318 12:19:10.069372    2644 round_trippers.go:580]     Audit-Id: e0b49b66-0e63-48d5-9635-5a894b23d55b
	I0318 12:19:10.069452    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:10.069793    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:10.515779    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:19:10.515779    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:10.515964    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:19:10.552840    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:10.552840    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:10.552840    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:10.552840    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:10.565142    2644 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0318 12:19:10.566171    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:10.566171    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:10 GMT
	I0318 12:19:10.566171    2644 round_trippers.go:580]     Audit-Id: 1c0e72e2-9798-4d67-80c1-11b06c400a53
	I0318 12:19:10.566171    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:10.566171    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:10.566171    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:10.566171    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:10.566171    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:11.059643    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:11.060130    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:11.060130    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:11.060130    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:11.071792    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:19:11.071792    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:11.072503    2644 sshutil.go:53] new ssh client: &{IP:172.25.151.112 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:19:11.097216    2644 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I0318 12:19:11.097311    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:11.097311    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:11.097311    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:11.097486    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:11 GMT
	I0318 12:19:11.097486    2644 round_trippers.go:580]     Audit-Id: 6e281be9-4688-495d-a7ca-2611a904344f
	I0318 12:19:11.097486    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:11.097486    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:11.097766    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:11.098282    2644 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:19:11.259130    2644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:19:11.548925    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:11.548997    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:11.548997    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:11.548997    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:11.552754    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:19:11.553694    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:11.553694    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:11.553694    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:11.553694    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:11 GMT
	I0318 12:19:11.553694    2644 round_trippers.go:580]     Audit-Id: 85bff678-8693-4597-b021-eb19534677a4
	I0318 12:19:11.553778    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:11.553778    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:11.553998    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:12.053995    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:12.054163    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:12.054163    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:12.054163    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:12.074964    2644 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0318 12:19:12.074964    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:12.074964    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:12.074964    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:12.074964    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:12.074964    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:12.074964    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:12 GMT
	I0318 12:19:12.074964    2644 round_trippers.go:580]     Audit-Id: 203eedc1-ba6d-4d07-b237-febe852f20a8
	I0318 12:19:12.074964    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:12.212341    2644 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0318 12:19:12.212341    2644 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0318 12:19:12.212496    2644 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0318 12:19:12.212496    2644 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0318 12:19:12.212496    2644 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0318 12:19:12.212496    2644 command_runner.go:130] > pod/storage-provisioner created
	I0318 12:19:12.564835    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:12.565012    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:12.565012    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:12.565012    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:12.570221    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:19:12.570490    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:12.570490    2644 round_trippers.go:580]     Audit-Id: f5a2e3ec-74bb-42fa-9528-7b3cb12ddd17
	I0318 12:19:12.570490    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:12.570490    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:12.570490    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:12.570490    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:12.570490    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:12 GMT
	I0318 12:19:12.570745    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:13.055426    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:13.055426    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:13.055426    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:13.055771    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:13.059872    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:13.060726    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:13.060726    2644 round_trippers.go:580]     Audit-Id: f1c60259-1d36-415e-ac8e-6e2dfa6bc96e
	I0318 12:19:13.060726    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:13.060726    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:13.060726    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:13.060726    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:13.060726    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:13 GMT
	I0318 12:19:13.060726    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:13.290567    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:19:13.290839    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:13.291253    2644 sshutil.go:53] new ssh client: &{IP:172.25.151.112 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:19:13.439700    2644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
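The sshutil/ssh_runner lines above show how these manifests get applied: minikube opens an SSH session to the guest VM (172.25.151.112:22, user docker, key machines\multinode-642600\id_rsa) and runs the in-VM kubectl against /var/lib/minikube/kubeconfig. A minimal sketch of that pattern using golang.org/x/crypto/ssh follows; the host, user, key path, and command are taken from the log, while the error handling and host-key policy are illustrative and this is not minikube's actual sshutil implementation.

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and target taken from the log above; adjust for your machine.
    	keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa`)
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		// Acceptable for a throwaway test VM, never for production.
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	client, err := ssh.Dial("tcp", "172.25.151.112:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()

    	// Same command the ssh_runner logs: apply an addon manifest with the in-VM kubectl.
    	out, err := session.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
    		"/var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml")
    	fmt.Print(string(out))
    	if err != nil {
    		log.Fatal(err)
    	}
    }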
	I0318 12:19:13.562487    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:13.562556    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:13.562556    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:13.562556    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:13.567329    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:13.567329    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:13.567329    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:13.567329    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:13 GMT
	I0318 12:19:13.567329    2644 round_trippers.go:580]     Audit-Id: 9713955a-8e81-4375-9b48-8405d2d130c7
	I0318 12:19:13.567329    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:13.567329    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:13.567329    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:13.567329    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:13.568277    2644 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:19:13.735274    2644 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0318 12:19:13.735952    2644 round_trippers.go:463] GET https://172.25.151.112:8443/apis/storage.k8s.io/v1/storageclasses
	I0318 12:19:13.735952    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:13.735952    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:13.735952    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:13.740527    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:13.740527    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:13.740527    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:13.740527    2644 round_trippers.go:580]     Content-Length: 1273
	I0318 12:19:13.740527    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:13 GMT
	I0318 12:19:13.740527    2644 round_trippers.go:580]     Audit-Id: 32cf312b-c8d3-48b9-857a-d6d86d05730a
	I0318 12:19:13.740527    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:13.740527    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:13.740527    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:13.740527    2644 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"standard","uid":"2401fa02-4be1-43ff-9851-3bd46c330d83","resourceVersion":"431","creationTimestamp":"2024-03-18T12:19:13Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-18T12:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0318 12:19:13.740527    2644 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2401fa02-4be1-43ff-9851-3bd46c330d83","resourceVersion":"431","creationTimestamp":"2024-03-18T12:19:13Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-18T12:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0318 12:19:13.740527    2644 round_trippers.go:463] PUT https://172.25.151.112:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 12:19:13.740527    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:13.740527    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:13.740527    2644 round_trippers.go:473]     Content-Type: application/json
	I0318 12:19:13.740527    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:13.745523    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:13.745523    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:13.745523    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:13.745523    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:13.745523    2644 round_trippers.go:580]     Content-Length: 1220
	I0318 12:19:13.745523    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:13 GMT
	I0318 12:19:13.745523    2644 round_trippers.go:580]     Audit-Id: c683c0f1-89d2-4bae-848a-bf0d46cc37ab
	I0318 12:19:13.745523    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:13.745523    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:13.745523    2644 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2401fa02-4be1-43ff-9851-3bd46c330d83","resourceVersion":"431","creationTimestamp":"2024-03-18T12:19:13Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-18T12:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0318 12:19:13.749536    2644 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 12:19:13.754522    2644 addons.go:505] duration metric: took 10.4803097s for enable addons: enabled=[storage-provisioner default-storageclass]
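The GET-then-PUT pair on /apis/storage.k8s.io/v1/storageclasses just above is the EnsureExists reconcile for the default-storageclass addon: list the storage classes, confirm "standard" carries storageclass.kubernetes.io/is-default-class=true, and update it back at the same resourceVersion (431). A hedged client-go equivalent is sketched below; it assumes a reachable kubeconfig (the in-VM path from the log would need adjusting on a host) and is not minikube's actual addon code.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path as logged inside the VM; point this at a local copy when run elsewhere.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ctx := context.Background()

    	// GET the class created by storageclass.yaml ...
    	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// ... make sure it is marked as the default class ...
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	// ... and PUT it back, an update against the fetched resourceVersion as in the log.
    	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("storageclass standard ensured as default")
    }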
	I0318 12:19:14.055064    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:14.055064    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:14.055064    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:14.055064    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:14.059569    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:14.059569    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:14.059569    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:14.059569    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:14 GMT
	I0318 12:19:14.059569    2644 round_trippers.go:580]     Audit-Id: 4db1fcb0-bd18-4ef3-bd23-8440a7cd3e2d
	I0318 12:19:14.059569    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:14.059569    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:14.059569    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:14.059919    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:14.556291    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:14.556291    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:14.556291    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:14.556291    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:14.560620    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:14.560620    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:14.560620    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:14.560620    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:14.560620    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:14 GMT
	I0318 12:19:14.561067    2644 round_trippers.go:580]     Audit-Id: d04360cd-c9e4-4eab-b77a-0f2cf201ebfc
	I0318 12:19:14.561067    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:14.561067    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:14.561405    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:15.057931    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:15.057931    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:15.058179    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:15.058179    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:15.062562    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:15.062562    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:15.062562    2644 round_trippers.go:580]     Audit-Id: c781a8ce-d341-4224-82b8-f781ff98fcc2
	I0318 12:19:15.062562    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:15.062562    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:15.062562    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:15.062562    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:15.062562    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:15 GMT
	I0318 12:19:15.063821    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:15.555512    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:15.555599    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:15.555599    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:15.555599    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:15.559999    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:15.559999    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:15.559999    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:15.559999    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:15 GMT
	I0318 12:19:15.559999    2644 round_trippers.go:580]     Audit-Id: f5c2ffc2-edc0-4156-8833-f4393b77820c
	I0318 12:19:15.559999    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:15.560486    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:15.560486    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:15.560713    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"348","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:19:16.059565    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:16.059565    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:16.059565    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:16.059565    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:16.065163    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:19:16.065890    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:16.065890    2644 round_trippers.go:580]     Audit-Id: 005869d3-5208-4748-8eb0-2d58eb6c38c1
	I0318 12:19:16.065890    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:16.065890    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:16.065890    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:16.065890    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:16.065890    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:16 GMT
	I0318 12:19:16.066474    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:16.066923    2644 node_ready.go:49] node "multinode-642600" has status "Ready":"True"
	I0318 12:19:16.066981    2644 node_ready.go:38] duration metric: took 11.5196059s for node "multinode-642600" to be "Ready" ...
	I0318 12:19:16.067039    2644 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
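The node_ready and pod_ready waits above are plain polling loops: GET the node roughly every 500ms until its NodeReady condition flips to True (11.5s here), then list the kube-system pods and wait for each one's PodReady condition. A minimal sketch of that loop with client-go, under the same kubeconfig assumption as the previous snippet and without minikube's retry/timeout bookkeeping:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isNodeReady reports whether the node's NodeReady condition is True.
    func isNodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ctx := context.Background()

    	// Poll the node about every 500ms, matching the GET cadence visible in the log.
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-642600", metav1.GetOptions{})
    		if err != nil {
    			log.Fatal(err)
    		}
    		if isNodeReady(node) {
    			break
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("node is Ready; now waiting for kube-system pods")

    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, p := range pods.Items {
    		for !isPodReady(&p) {
    			time.Sleep(500 * time.Millisecond)
    			refreshed, err := cs.CoreV1().Pods("kube-system").Get(ctx, p.Name, metav1.GetOptions{})
    			if err != nil {
    				log.Fatal(err)
    			}
    			p = *refreshed
    		}
    		fmt.Printf("pod %s is Ready\n", p.Name)
    	}
    }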
	I0318 12:19:16.067152    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods
	I0318 12:19:16.067182    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:16.067212    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:16.067212    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:16.083245    2644 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0318 12:19:16.083245    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:16.083245    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:16 GMT
	I0318 12:19:16.083245    2644 round_trippers.go:580]     Audit-Id: e317a55d-2fec-4dfa-97f7-0bbb78778398
	I0318 12:19:16.083245    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:16.083849    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:16.083849    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:16.083849    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:16.085738    2644 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"438","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54012 chars]
	I0318 12:19:16.090238    2644 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace to be "Ready" ...
	I0318 12:19:16.090238    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:19:16.090238    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:16.090238    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:16.090238    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:16.095226    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:16.095226    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:16.095226    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:16.095226    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:16.096017    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:16 GMT
	I0318 12:19:16.096017    2644 round_trippers.go:580]     Audit-Id: 7526e644-8fa6-4ada-94d4-398259c7aae4
	I0318 12:19:16.096017    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:16.096017    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:16.096017    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"438","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0318 12:19:16.097014    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:16.097014    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:16.097096    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:16.097096    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:16.103232    2644 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:19:16.103232    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:16.103232    2644 round_trippers.go:580]     Audit-Id: 5f52c7a4-222f-4b38-8a6b-cdb33a4203df
	I0318 12:19:16.103232    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:16.103232    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:16.103232    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:16.103232    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:16.103232    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:16 GMT
	I0318 12:19:16.106258    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:16.599312    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:19:16.599376    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:16.599376    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:16.599376    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:16.604883    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:19:16.604883    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:16.604883    2644 round_trippers.go:580]     Audit-Id: 2c832c9b-10bd-4f96-b0ee-72e4114034aa
	I0318 12:19:16.604883    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:16.604883    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:16.604883    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:16.604883    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:16.604883    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:16 GMT
	I0318 12:19:16.604883    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"438","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0318 12:19:16.605942    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:16.605942    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:16.605942    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:16.605942    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:16.611196    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:19:16.611196    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:16.611196    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:16.611196    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:16.611196    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:16.611196    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:16.611196    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:16 GMT
	I0318 12:19:16.611196    2644 round_trippers.go:580]     Audit-Id: 1d46aac2-b593-45de-aae6-f4fed3506465
	I0318 12:19:16.611196    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:17.106195    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:19:17.106229    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:17.106275    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:17.106275    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:17.115547    2644 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 12:19:17.115547    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:17.115547    2644 round_trippers.go:580]     Audit-Id: c0f871de-6070-4aad-aa3f-0c3e96fc97f8
	I0318 12:19:17.115547    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:17.115547    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:17.115547    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:17.115547    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:17.115547    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:17 GMT
	I0318 12:19:17.115547    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"438","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0318 12:19:17.116329    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:17.116329    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:17.116886    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:17.116886    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:17.121625    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:17.121625    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:17.121625    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:17.121625    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:17.121625    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:17 GMT
	I0318 12:19:17.121625    2644 round_trippers.go:580]     Audit-Id: a18e2484-069f-419d-bc95-240d2c60453c
	I0318 12:19:17.121625    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:17.121625    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:17.122533    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:17.594333    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:19:17.594333    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:17.594477    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:17.594477    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:17.599180    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:17.599180    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:17.600115    2644 round_trippers.go:580]     Audit-Id: 379f7f3d-588d-4472-b4ad-9f777254db57
	I0318 12:19:17.600115    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:17.600115    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:17.600115    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:17.600115    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:17.600115    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:17 GMT
	I0318 12:19:17.600400    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"438","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0318 12:19:17.601296    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:17.601468    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:17.601468    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:17.601468    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:17.605671    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:17.605671    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:17.605671    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:17.605671    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:17.605671    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:17 GMT
	I0318 12:19:17.605671    2644 round_trippers.go:580]     Audit-Id: b67d4d0b-7e42-42c2-854e-85d3284abd78
	I0318 12:19:17.605671    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:17.605671    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:17.606904    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:18.096474    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:19:18.096474    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:18.096474    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:18.096560    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:18.100943    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:18.101399    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:18.101399    2644 round_trippers.go:580]     Audit-Id: 433ec720-364a-4438-be8b-ad41e4b0a195
	I0318 12:19:18.101399    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:18.101399    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:18.101399    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:18.101399    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:18.101399    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:18 GMT
	I0318 12:19:18.101557    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"438","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0318 12:19:18.102392    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:18.102392    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:18.102475    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:18.102475    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:18.109214    2644 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:19:18.109214    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:18.109214    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:18 GMT
	I0318 12:19:18.109214    2644 round_trippers.go:580]     Audit-Id: 56ef24b5-5897-47be-84cb-761704899a12
	I0318 12:19:18.109521    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:18.109521    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:18.109521    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:18.109521    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:18.109902    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:18.110283    2644 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:19:18.604527    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:19:18.604527    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:18.604609    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:18.604609    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:18.610814    2644 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:19:18.610814    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:18.610814    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:18.610814    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:18 GMT
	I0318 12:19:18.610814    2644 round_trippers.go:580]     Audit-Id: 9af6d769-4417-417c-ac25-2f19b8c1b6b0
	I0318 12:19:18.610814    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:18.610814    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:18.610814    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:18.611784    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"438","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0318 12:19:18.611784    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:18.611784    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:18.611784    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:18.611784    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:18.616783    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:18.616965    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:18.616965    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:18.616965    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:18.616965    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:18.616965    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:18.616965    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:18 GMT
	I0318 12:19:18.616965    2644 round_trippers.go:580]     Audit-Id: 7d952453-27a8-4456-ba42-baf2c0c3c66d
	I0318 12:19:18.617175    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:19.093053    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:19:19.093247    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:19.093247    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:19.093247    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:19.097983    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:19.097983    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:19.097983    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:19.097983    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:19 GMT
	I0318 12:19:19.097983    2644 round_trippers.go:580]     Audit-Id: 5f6116f6-0639-4305-a7f7-fa3da4b93abe
	I0318 12:19:19.097983    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:19.097983    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:19.097983    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:19.099114    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"438","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0318 12:19:19.099346    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:19.099346    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:19.099346    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:19.099346    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:19.103013    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:19:19.103013    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:19.103013    2644 round_trippers.go:580]     Audit-Id: 2820ab18-7cea-44e2-ba7a-ba1a690eee18
	I0318 12:19:19.103013    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:19.103013    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:19.103013    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:19.103013    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:19.103013    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:19 GMT
	I0318 12:19:19.103919    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:19.596207    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:19:19.596207    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:19.596207    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:19.596207    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:19.601710    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:19:19.601710    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:19.601710    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:19.601710    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:19 GMT
	I0318 12:19:19.601710    2644 round_trippers.go:580]     Audit-Id: c3231ec6-2e02-4f48-8678-d4d74a19c422
	I0318 12:19:19.601710    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:19.601710    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:19.601710    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:19.601710    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"457","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0318 12:19:19.603845    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:19.603845    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:19.603845    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:19.603845    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:19.607577    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:19:19.607577    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:19.607577    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:19 GMT
	I0318 12:19:19.607577    2644 round_trippers.go:580]     Audit-Id: da835b12-e7ae-44c6-aec4-a7c9e1ce98d5
	I0318 12:19:19.607577    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:19.607577    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:19.607577    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:19.607577    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:19.608588    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:19.609007    2644 pod_ready.go:92] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"True"
	I0318 12:19:19.609007    2644 pod_ready.go:81] duration metric: took 3.5187476s for pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace to be "Ready" ...
	I0318 12:19:19.609007    2644 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:19:19.609007    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-642600
	I0318 12:19:19.609007    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:19.609007    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:19.609007    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:19.612030    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:19:19.612030    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:19.612030    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:19.612030    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:19.612030    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:19.612030    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:19 GMT
	I0318 12:19:19.612030    2644 round_trippers.go:580]     Audit-Id: bc380bbd-f0b4-40f8-9449-5c41834a8a38
	I0318 12:19:19.612030    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:19.612783    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-642600","namespace":"kube-system","uid":"237133d7-6f1a-42ee-8cf2-a2d7564d67fc","resourceVersion":"418","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.112:2379","kubernetes.io/config.hash":"ec96a596e22f5afedbd92a854d1b8bec","kubernetes.io/config.mirror":"ec96a596e22f5afedbd92a854d1b8bec","kubernetes.io/config.seen":"2024-03-18T12:18:50.896439006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0318 12:19:19.613130    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:19.613130    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:19.613130    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:19.613130    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:19.615803    2644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:19:19.615803    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:19.615803    2644 round_trippers.go:580]     Audit-Id: 26c59159-4bd7-4813-8c36-338fc431c8c0
	I0318 12:19:19.615803    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:19.615803    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:19.615803    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:19.616845    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:19.616845    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:19 GMT
	I0318 12:19:19.616845    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:19.616845    2644 pod_ready.go:92] pod "etcd-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:19:19.616845    2644 pod_ready.go:81] duration metric: took 7.8379ms for pod "etcd-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:19:19.616845    2644 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:19:19.616845    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-642600
	I0318 12:19:19.616845    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:19.616845    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:19.616845    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:19.620911    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:19.620911    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:19.620911    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:19.620911    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:19.620911    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:19.620911    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:19 GMT
	I0318 12:19:19.620911    2644 round_trippers.go:580]     Audit-Id: 5ebf8619-0e79-4e06-9c25-2a544130318f
	I0318 12:19:19.620911    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:19.621525    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-642600","namespace":"kube-system","uid":"4aa98cb9-f6ab-40b3-8c15-235ba4e09985","resourceVersion":"417","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.151.112:8443","kubernetes.io/config.hash":"d04d3e415061983b742e6c14f1a5f562","kubernetes.io/config.mirror":"d04d3e415061983b742e6c14f1a5f562","kubernetes.io/config.seen":"2024-03-18T12:18:50.896431006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0318 12:19:19.622113    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:19.622113    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:19.622162    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:19.622162    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:19.624414    2644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:19:19.624414    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:19.624414    2644 round_trippers.go:580]     Audit-Id: 61bc5427-c67d-4cc6-9d9d-9789bd4faab4
	I0318 12:19:19.624414    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:19.624414    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:19.624414    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:19.624414    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:19.624414    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:19 GMT
	I0318 12:19:19.624414    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:19.625433    2644 pod_ready.go:92] pod "kube-apiserver-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:19:19.625433    2644 pod_ready.go:81] duration metric: took 8.5877ms for pod "kube-apiserver-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:19:19.625433    2644 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:19:19.625433    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-642600
	I0318 12:19:19.625433    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:19.625433    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:19.625433    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:19.629271    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:19:19.629271    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:19.629271    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:19.629271    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:19.629271    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:19.629271    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:19 GMT
	I0318 12:19:19.629271    2644 round_trippers.go:580]     Audit-Id: bbd4c94b-8efd-4a6b-b11a-33e411e58813
	I0318 12:19:19.629271    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:19.629994    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-642600","namespace":"kube-system","uid":"1dd2a576-c5a0-44e5-b194-545e8b18962c","resourceVersion":"415","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a1608bc774d0b3e96e1b6fbbded5cb52","kubernetes.io/config.mirror":"a1608bc774d0b3e96e1b6fbbded5cb52","kubernetes.io/config.seen":"2024-03-18T12:18:50.896437006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0318 12:19:19.630537    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:19.630596    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:19.630596    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:19.630596    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:19.632808    2644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:19:19.632808    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:19.632808    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:19.632808    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:19.632808    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:19 GMT
	I0318 12:19:19.632808    2644 round_trippers.go:580]     Audit-Id: 113d8b20-a9cd-4f62-a0e2-e97d42866aca
	I0318 12:19:19.632808    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:19.632808    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:19.633686    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:19.633686    2644 pod_ready.go:92] pod "kube-controller-manager-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:19:19.633686    2644 pod_ready.go:81] duration metric: took 8.2525ms for pod "kube-controller-manager-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:19:19.633686    2644 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4dg79" in "kube-system" namespace to be "Ready" ...
	I0318 12:19:19.633686    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4dg79
	I0318 12:19:19.634241    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:19.634241    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:19.634241    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:19.637555    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:19:19.637555    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:19.637555    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:19 GMT
	I0318 12:19:19.637555    2644 round_trippers.go:580]     Audit-Id: 617bee1d-ab71-470c-bb47-2973ae725db0
	I0318 12:19:19.637555    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:19.637555    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:19.637555    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:19.637555    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:19.638398    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4dg79","generateName":"kube-proxy-","namespace":"kube-system","uid":"449242c2-ad12-4da5-b339-3be7ab8a9b16","resourceVersion":"409","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0318 12:19:19.639216    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:19.639216    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:19.639550    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:19.639550    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:19.643038    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:19:19.643038    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:19.643038    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:19.643038    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:19.643038    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:19.643038    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:19.643038    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:19 GMT
	I0318 12:19:19.643038    2644 round_trippers.go:580]     Audit-Id: 25c61937-4b84-4e28-86c6-a37dc78a57f2
	I0318 12:19:19.643038    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:19.643694    2644 pod_ready.go:92] pod "kube-proxy-4dg79" in "kube-system" namespace has status "Ready":"True"
	I0318 12:19:19.643694    2644 pod_ready.go:81] duration metric: took 10.0083ms for pod "kube-proxy-4dg79" in "kube-system" namespace to be "Ready" ...
	I0318 12:19:19.643694    2644 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:19:19.800673    2644 request.go:629] Waited for 156.3729ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-642600
	I0318 12:19:19.800730    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-642600
	I0318 12:19:19.800730    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:19.800730    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:19.800730    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:19.805354    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:19.805354    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:19.805639    2644 round_trippers.go:580]     Audit-Id: af87b056-0735-475c-8e38-7425eefd7a6f
	I0318 12:19:19.805702    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:19.805702    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:19.805702    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:19.805702    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:19.805702    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:19 GMT
	I0318 12:19:19.805702    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-642600","namespace":"kube-system","uid":"52e29d3b-d6e9-4109-916d-74123a2ab190","resourceVersion":"414","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cf50844b540be8ed0b3e767db413ac8f","kubernetes.io/config.mirror":"cf50844b540be8ed0b3e767db413ac8f","kubernetes.io/config.seen":"2024-03-18T12:18:50.896438106Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0318 12:19:20.003791    2644 request.go:629] Waited for 197.2906ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:20.004061    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:19:20.004061    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:20.004126    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:20.004126    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:20.007912    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:19:20.007912    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:20.007912    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:20.007912    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:20 GMT
	I0318 12:19:20.007912    2644 round_trippers.go:580]     Audit-Id: 5f9de53b-3747-4982-bb52-b7f1b8564292
	I0318 12:19:20.008054    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:20.008054    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:20.008054    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:20.008141    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:19:20.008141    2644 pod_ready.go:92] pod "kube-scheduler-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:19:20.008141    2644 pod_ready.go:81] duration metric: took 364.4449ms for pod "kube-scheduler-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:19:20.008141    2644 pod_ready.go:38] duration metric: took 3.941078s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
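The preceding entries are the visible trace of a readiness poll: GET the pod, check its Ready condition, GET the node, sleep, repeat until the deadline. A minimal sketch of that loop using client-go; the function name and the 500ms interval are illustrative, and minikube's real helper (pod_ready.go) additionally tracks node state and duration metrics:

```go
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True or the
// timeout elapses, mirroring the GET-pod/GET-node loop logged above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // status "Ready":"True", as logged above
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between attempts
	}
	return fmt.Errorf("pod %s/%s was not Ready within %v", ns, name, timeout)
}
```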
	I0318 12:19:20.008762    2644 api_server.go:52] waiting for apiserver process to appear ...
	I0318 12:19:20.021314    2644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:19:20.054051    2644 command_runner.go:130] > 2144
	I0318 12:19:20.054435    2644 api_server.go:72] duration metric: took 16.7801434s to wait for apiserver process to appear ...
	I0318 12:19:20.054435    2644 api_server.go:88] waiting for apiserver healthz status ...
	I0318 12:19:20.054615    2644 api_server.go:253] Checking apiserver healthz at https://172.25.151.112:8443/healthz ...
	I0318 12:19:20.065868    2644 api_server.go:279] https://172.25.151.112:8443/healthz returned 200:
	ok
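The healthz gate above is a plain HTTPS GET that must return 200 with the body "ok". A self-contained sketch of that check; the InsecureSkipVerify setting stands in for minikube's real client-certificate and CA handling:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Skipping TLS verification keeps the sketch standalone; minikube
	// authenticates with the cluster's CA and client certs instead.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://172.25.151.112:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```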
	I0318 12:19:20.066871    2644 round_trippers.go:463] GET https://172.25.151.112:8443/version
	I0318 12:19:20.066944    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:20.067000    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:20.067000    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:20.068316    2644 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 12:19:20.068316    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:20.069275    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:20.069275    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:20.069309    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:20.069309    2644 round_trippers.go:580]     Content-Length: 264
	I0318 12:19:20.069309    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:20 GMT
	I0318 12:19:20.069309    2644 round_trippers.go:580]     Audit-Id: 3320cd3d-cd5b-4487-bb38-4bfd2abd1195
	I0318 12:19:20.069309    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:20.069309    2644 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 12:19:20.069431    2644 api_server.go:141] control plane version: v1.28.4
	I0318 12:19:20.069431    2644 api_server.go:131] duration metric: took 14.9957ms to wait for apiserver health ...
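The version probe decodes the /version payload shown above. A sketch that parses the same JSON into a local struct mirroring a few fields of k8s.io/apimachinery's version.Info (the local type keeps the example standalone):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the relevant fields of the /version response body.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	payload := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(payload, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // -> v1.28.4
}
```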
	I0318 12:19:20.069431    2644 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 12:19:20.207283    2644 request.go:629] Waited for 137.7299ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods
	I0318 12:19:20.207522    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods
	I0318 12:19:20.207522    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:20.207522    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:20.207522    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:20.213956    2644 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:19:20.214035    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:20.214035    2644 round_trippers.go:580]     Audit-Id: 2674396e-a528-4dad-84ea-e5fcfd8503a9
	I0318 12:19:20.214035    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:20.214035    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:20.214035    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:20.214035    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:20.214035    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:20 GMT
	I0318 12:19:20.215961    2644 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"457","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54128 chars]
	I0318 12:19:20.218925    2644 system_pods.go:59] 8 kube-system pods found
	I0318 12:19:20.219045    2644 system_pods.go:61] "coredns-5dd5756b68-fgn7v" [7bc52797-b4bd-4046-b3d5-fae9c8ccd13b] Running
	I0318 12:19:20.219045    2644 system_pods.go:61] "etcd-multinode-642600" [237133d7-6f1a-42ee-8cf2-a2d7564d67fc] Running
	I0318 12:19:20.219045    2644 system_pods.go:61] "kindnet-kpt4f" [acd9d7a0-0e27-4bbb-8562-6fbf374742ca] Running
	I0318 12:19:20.219045    2644 system_pods.go:61] "kube-apiserver-multinode-642600" [4aa98cb9-f6ab-40b3-8c15-235ba4e09985] Running
	I0318 12:19:20.219045    2644 system_pods.go:61] "kube-controller-manager-multinode-642600" [1dd2a576-c5a0-44e5-b194-545e8b18962c] Running
	I0318 12:19:20.219045    2644 system_pods.go:61] "kube-proxy-4dg79" [449242c2-ad12-4da5-b339-3be7ab8a9b16] Running
	I0318 12:19:20.219045    2644 system_pods.go:61] "kube-scheduler-multinode-642600" [52e29d3b-d6e9-4109-916d-74123a2ab190] Running
	I0318 12:19:20.219045    2644 system_pods.go:61] "storage-provisioner" [d2718b8a-26a9-4c86-bf9a-221d1ee23ceb] Running
	I0318 12:19:20.219045    2644 system_pods.go:74] duration metric: took 149.6125ms to wait for pod list to return data ...
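The "Waited ... due to client-side throttling" entries above and below come from client-go's token-bucket rate limiter (by default QPS 5 with a burst of 10): once the burst is spent, each request blocks until a token is refilled, which is the wait the log reports. A small illustration of the same limiter:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// client-go's default client-side limiter: 5 requests/second, burst 10.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	for i := 0; i < 12; i++ {
		limiter.Accept() // blocks once the burst of 10 is exhausted
		fmt.Println("request", i)
	}
}
```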
	I0318 12:19:20.219166    2644 default_sa.go:34] waiting for default service account to be created ...
	I0318 12:19:20.410417    2644 request.go:629] Waited for 190.6478ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.112:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:19:20.410526    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:19:20.410526    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:20.410526    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:20.410526    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:20.414833    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:19:20.414833    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:20.414833    2644 round_trippers.go:580]     Content-Length: 261
	I0318 12:19:20.414833    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:20 GMT
	I0318 12:19:20.414833    2644 round_trippers.go:580]     Audit-Id: 5a84e68b-2ab6-4a4e-985c-1093c85687a7
	I0318 12:19:20.414833    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:20.414833    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:20.414833    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:20.415207    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:20.415207    2644 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"cb0307d5-001e-4a17-89ea-7a5b4f2963cc","resourceVersion":"344","creationTimestamp":"2024-03-18T12:19:02Z"}}]}
	I0318 12:19:20.415356    2644 default_sa.go:45] found service account: "default"
	I0318 12:19:20.415356    2644 default_sa.go:55] duration metric: took 196.1884ms for default service account to be created ...
	I0318 12:19:20.415356    2644 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 12:19:20.596790    2644 request.go:629] Waited for 181.4333ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods
	I0318 12:19:20.596790    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods
	I0318 12:19:20.596790    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:20.596790    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:20.596790    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:20.602427    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:19:20.602427    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:20.602427    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:20.602427    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:20.602427    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:20.602427    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:20.602586    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:20 GMT
	I0318 12:19:20.602586    2644 round_trippers.go:580]     Audit-Id: cd0b0a7c-0c0e-46a5-b459-d6b715f1cb6a
	I0318 12:19:20.604258    2644 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"457","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54128 chars]
	I0318 12:19:20.606512    2644 system_pods.go:86] 8 kube-system pods found
	I0318 12:19:20.607044    2644 system_pods.go:89] "coredns-5dd5756b68-fgn7v" [7bc52797-b4bd-4046-b3d5-fae9c8ccd13b] Running
	I0318 12:19:20.607298    2644 system_pods.go:89] "etcd-multinode-642600" [237133d7-6f1a-42ee-8cf2-a2d7564d67fc] Running
	I0318 12:19:20.607298    2644 system_pods.go:89] "kindnet-kpt4f" [acd9d7a0-0e27-4bbb-8562-6fbf374742ca] Running
	I0318 12:19:20.607298    2644 system_pods.go:89] "kube-apiserver-multinode-642600" [4aa98cb9-f6ab-40b3-8c15-235ba4e09985] Running
	I0318 12:19:20.607298    2644 system_pods.go:89] "kube-controller-manager-multinode-642600" [1dd2a576-c5a0-44e5-b194-545e8b18962c] Running
	I0318 12:19:20.607298    2644 system_pods.go:89] "kube-proxy-4dg79" [449242c2-ad12-4da5-b339-3be7ab8a9b16] Running
	I0318 12:19:20.607298    2644 system_pods.go:89] "kube-scheduler-multinode-642600" [52e29d3b-d6e9-4109-916d-74123a2ab190] Running
	I0318 12:19:20.607398    2644 system_pods.go:89] "storage-provisioner" [d2718b8a-26a9-4c86-bf9a-221d1ee23ceb] Running
	I0318 12:19:20.607459    2644 system_pods.go:126] duration metric: took 192.0768ms to wait for k8s-apps to be running ...
	I0318 12:19:20.607459    2644 system_svc.go:44] waiting for kubelet service to be running ...
	I0318 12:19:20.620217    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:19:20.650446    2644 system_svc.go:56] duration metric: took 42.0649ms (WaitForService) to wait for kubelet
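The kubelet check relies on systemctl's exit code: with --quiet, is-active prints nothing and returns 0 only when the unit is active. minikube runs the command over SSH inside the VM; a local equivalent (a sketch, assuming a systemd host) looks like this:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means the unit is active; any non-zero code means
	// inactive, failed, or unknown.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```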
	I0318 12:19:20.650446    2644 kubeadm.go:576] duration metric: took 17.3761914s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:19:20.650516    2644 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:19:20.800767    2644 request.go:629] Waited for 150.1259ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.112:8443/api/v1/nodes
	I0318 12:19:20.801181    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes
	I0318 12:19:20.801309    2644 round_trippers.go:469] Request Headers:
	I0318 12:19:20.801366    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:19:20.801366    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:19:20.805092    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:19:20.805092    2644 round_trippers.go:577] Response Headers:
	I0318 12:19:20.805546    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:19:20.805546    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:19:20.805546    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:19:20.805546    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:19:20 GMT
	I0318 12:19:20.805546    2644 round_trippers.go:580]     Audit-Id: 34837499-8086-43df-985c-588abb94f451
	I0318 12:19:20.805546    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:19:20.805778    2644 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"434","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0318 12:19:20.809262    2644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:19:20.809341    2644 node_conditions.go:123] node cpu capacity is 2
	I0318 12:19:20.809341    2644 node_conditions.go:105] duration metric: took 158.8239ms to run NodePressure ...
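The NodePressure check reads node capacity fields, which arrive as Kubernetes quantity strings such as "17734596Ki". A sketch of decoding them with apimachinery's resource package:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Quantity understands binary-SI suffixes like Ki, Mi, Gi.
	ephemeral := resource.MustParse("17734596Ki") // the capacity logged above, ~17 GiB
	cpu := resource.MustParse("2")
	fmt.Println("ephemeral bytes:", ephemeral.Value())
	fmt.Println("cpus:", cpu.Value())
}
```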
	I0318 12:19:20.809341    2644 start.go:240] waiting for startup goroutines ...
	I0318 12:19:20.809432    2644 start.go:245] waiting for cluster config update ...
	I0318 12:19:20.809475    2644 start.go:254] writing updated cluster config ...
	I0318 12:19:20.815830    2644 out.go:177] 
	I0318 12:19:20.818946    2644 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:19:20.823000    2644 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:19:20.823000    2644 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:19:20.831160    2644 out.go:177] * Starting "multinode-642600-m02" worker node in "multinode-642600" cluster
	I0318 12:19:20.833790    2644 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 12:19:20.834366    2644 cache.go:56] Caching tarball of preloaded images
	I0318 12:19:20.834691    2644 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 12:19:20.834691    2644 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 12:19:20.834691    2644 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:19:20.837110    2644 start.go:360] acquireMachinesLock for multinode-642600-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:19:20.837110    2644 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-642600-m02"
	I0318 12:19:20.837110    2644 start.go:93] Provisioning new machine with config: &{Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.151.112 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 12:19:20.837110    2644 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0318 12:19:20.844829    2644 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 12:19:20.845182    2644 start.go:159] libmachine.API.Create for "multinode-642600" (driver="hyperv")
	I0318 12:19:20.845232    2644 client.go:168] LocalClient.Create starting
	I0318 12:19:20.845407    2644 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0318 12:19:20.845407    2644 main.go:141] libmachine: Decoding PEM data...
	I0318 12:19:20.845407    2644 main.go:141] libmachine: Parsing certificate...
	I0318 12:19:20.846107    2644 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0318 12:19:20.846161    2644 main.go:141] libmachine: Decoding PEM data...
	I0318 12:19:20.846161    2644 main.go:141] libmachine: Parsing certificate...
	I0318 12:19:20.846161    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 12:19:22.872205    2644 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 12:19:22.872393    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:22.872393    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 12:19:24.707904    2644 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 12:19:24.708983    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:24.709070    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 12:19:26.296736    2644 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 12:19:26.297011    2644 main.go:141] libmachine: [stderr =====>] : 
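Note: the two IsInRole probes above are the driver's privilege check. Membership in the local "Hyper-V Administrators" group (well-known SID S-1-5-32-578) comes back False on this host, so it is the fallback check against BUILTIN\Administrators (True) that lets provisioning proceed. The same probe, condensed into one illustrative snippet (not minikube's code):

    # Accept either Hyper-V Administrators (S-1-5-32-578) or full Administrators.
    $principal = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    $hvAdmins  = $principal.IsInRole([System.Security.Principal.SecurityIdentifier]::new('S-1-5-32-578'))
    $admins    = $principal.IsInRole([Security.Principal.WindowsBuiltInRole]'Administrator')
    "Hyper-V Admins: $hvAdmins; Administrators: $admins"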
	I0318 12:19:26.297130    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 12:19:30.219760    2644 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 12:19:30.219760    2644 main.go:141] libmachine: [stderr =====>] : 
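Note: the Get-VMSwitch query above is the driver's switch discovery: list every switch that is either External or carries the fixed GUID of Hyper-V's built-in "Default Switch" (c08cb7b8-9b3c-408e-8e30-5e16a3aeb444, the same on every host). Only the Default Switch exists here (SwitchType 1 = Internal), so it is the one selected below. An illustrative sketch of that selection logic:

    # Find a usable switch: any External switch, else the built-in Default Switch.
    $defaultSwitchId = 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444'
    $candidates = @(Hyper-V\Get-VMSwitch |
        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq $defaultSwitchId) })
    if ($candidates.Count -eq 0) { throw 'no usable Hyper-V switch found' }
    # Prefer an External switch when one exists; otherwise take the Default Switch.
    $pick = @($candidates | Sort-Object { $_.SwitchType -ne 'External' })[0]
    "Using switch `"$($pick.Name)`""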
	I0318 12:19:30.224471    2644 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 12:19:30.800200    2644 main.go:141] libmachine: Creating SSH key...
	I0318 12:19:31.057218    2644 main.go:141] libmachine: Creating VM...
	I0318 12:19:31.057218    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 12:19:34.144324    2644 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 12:19:34.144324    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:34.145425    2644 main.go:141] libmachine: Using switch "Default Switch"
	I0318 12:19:34.145425    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 12:19:36.009475    2644 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 12:19:36.009475    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:36.009475    2644 main.go:141] libmachine: Creating VHD
	I0318 12:19:36.009475    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 12:19:39.923122    2644 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 83899DD0-8ABA-4A04-A661-7763F64DB059
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 12:19:39.923122    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:39.923122    2644 main.go:141] libmachine: Writing magic tar header
	I0318 12:19:39.923122    2644 main.go:141] libmachine: Writing SSH key tar header
	I0318 12:19:39.933609    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 12:19:43.192359    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:19:43.192359    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:43.193212    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\disk.vhd' -SizeBytes 20000MB
	I0318 12:19:45.831943    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:19:45.832643    2644 main.go:141] libmachine: [stderr =====>] : 
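Note: the three VHD steps above are the driver's disk-creation trick: it first creates a tiny 10 MB *fixed* VHD (a flat file whose payload offset is predictable), writes a raw tar stream containing the SSH key directly into it (the "magic tar header" lines), then converts it to a dynamic VHD and resizes it to the requested 20000 MB. Roughly, using the same paths as the log:

    $dir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02'
    # 1. Fixed VHD: payload can be written with ordinary file I/O.
    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # (minikube writes the SSH-key tar stream into fixed.vhd at this point)
    # 2. Convert to a dynamic (sparse) disk, dropping the fixed original.
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    # 3. Grow to the size requested by the machine config (DiskSize:20000).
    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB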
	I0318 12:19:45.832643    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-642600-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 12:19:49.656782    2644 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-642600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 12:19:49.656847    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:49.656847    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-642600-m02 -DynamicMemoryEnabled $false
	I0318 12:19:51.999368    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:19:51.999368    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:51.999751    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-642600-m02 -Count 2
	I0318 12:19:54.275892    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:19:54.276771    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:54.276771    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-642600-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\boot2docker.iso'
	I0318 12:19:56.964172    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:19:56.964172    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:56.964278    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-642600-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\disk.vhd'
	I0318 12:19:59.694078    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:19:59.694078    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:19:59.694078    2644 main.go:141] libmachine: Starting VM...
	I0318 12:19:59.694389    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-642600-m02
	I0318 12:20:02.828821    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:20:02.829228    2644 main.go:141] libmachine: [stderr =====>] : 
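Note: the sequence from New-VM through Start-VM above is the entire machine build-out: create the VM on the chosen switch, pin memory to a static 2200 MB, give it 2 vCPUs, attach the boot2docker ISO as the DVD boot device, attach the dynamic disk, and power on. Consolidated into one illustrative session:

    $name = 'multinode-642600-m02'
    $dir  = "C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\$name"
    Hyper-V\New-VM $name -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName $name -DynamicMemoryEnabled $false   # static allocation
    Hyper-V\Set-VMProcessor $name -Count 2
    Hyper-V\Set-VMDvdDrive -VMName $name -Path "$dir\boot2docker.iso" # boot ISO
    Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$dir\disk.vhd"
    Hyper-V\Start-VM $name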
	I0318 12:20:02.829228    2644 main.go:141] libmachine: Waiting for host to start...
	I0318 12:20:02.829361    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:20:05.212166    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:20:05.212397    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:05.212460    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:20:07.877929    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:20:07.877929    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:08.885577    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:20:11.169723    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:20:11.169723    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:11.169723    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:20:13.847516    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:20:13.847516    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:14.859996    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:20:17.172527    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:20:17.173158    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:17.173158    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:20:19.805701    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:20:19.806772    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:20.813426    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:20:23.122036    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:20:23.122036    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:23.122036    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:20:25.839714    2644 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:20:25.840620    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:26.852971    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:20:29.183466    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:20:29.183507    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:29.183581    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:20:31.901404    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:20:31.902359    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:31.902359    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:20:34.085801    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:20:34.085801    2644 main.go:141] libmachine: [stderr =====>] : 
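Note: "Waiting for host to start..." is a poll loop: each round re-queries the VM state and the first IP address on its first network adapter, sleeping about a second in between. The empty stdout lines above are rounds where the guest integration services had not yet reported an address; one finally appears at 12:20:31. Approximately (a sketch of the observed behavior, not the driver's code):

    $name = 'multinode-642600-m02'
    do {
        Start-Sleep -Seconds 1
        $state = ( Hyper-V\Get-VM $name ).State
        # First address of the first adapter; empty until the guest reports one.
        $ip = (( Hyper-V\Get-VM $name ).NetworkAdapters[0]).IPAddresses[0]
    } until ($state -eq 'Running' -and $ip)
    "host reachable at $ip"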
	I0318 12:20:34.085975    2644 machine.go:94] provisionDockerMachine start ...
	I0318 12:20:34.086091    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:20:36.355807    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:20:36.355807    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:36.355992    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:20:39.064711    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:20:39.064984    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:39.070348    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:20:39.082038    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.159.102 22 <nil> <nil>}
	I0318 12:20:39.082074    2644 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 12:20:39.220739    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 12:20:39.220739    2644 buildroot.go:166] provisioning hostname "multinode-642600-m02"
	I0318 12:20:39.221279    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:20:41.473217    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:20:41.473289    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:41.473289    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:20:44.196878    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:20:44.197782    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:44.203955    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:20:44.203955    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.159.102 22 <nil> <nil>}
	I0318 12:20:44.203955    2644 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-642600-m02 && echo "multinode-642600-m02" | sudo tee /etc/hostname
	I0318 12:20:44.372676    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-642600-m02
	
	I0318 12:20:44.372676    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:20:46.650784    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:20:46.651440    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:46.651440    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:20:49.380103    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:20:49.380103    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:49.386331    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:20:49.386331    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.159.102 22 <nil> <nil>}
	I0318 12:20:49.386331    2644 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-642600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-642600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-642600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:20:49.527997    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: 
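Note: hostname provisioning is two idempotent SSH steps: set the live hostname and persist it to /etc/hostname, then the script above rewrites an existing 127.0.1.1 entry in /etc/hosts (or appends one) so the new name resolves locally without accumulating duplicates. Driving the first step by hand from the host would look roughly like this (ssh.exe invocation and flags are an assumption; key path and username come from the log):

    $ip   = '172.25.159.102'
    $key  = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\id_rsa'
    $name = 'multinode-642600-m02'
    # Same command the log sends: set the kernel hostname and persist it.
    & ssh.exe -i $key -o StrictHostKeyChecking=no "docker@$ip" `
        "sudo hostname $name && echo '$name' | sudo tee /etc/hostname"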
	I0318 12:20:49.527997    2644 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0318 12:20:49.527997    2644 buildroot.go:174] setting up certificates
	I0318 12:20:49.527997    2644 provision.go:84] configureAuth start
	I0318 12:20:49.527997    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:20:51.770410    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:20:51.770410    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:51.771312    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:20:54.508864    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:20:54.509100    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:54.509220    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:20:56.769317    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:20:56.769317    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:56.769724    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:20:59.449616    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:20:59.449654    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:20:59.449654    2644 provision.go:143] copyHostCerts
	I0318 12:20:59.449844    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0318 12:20:59.450101    2644 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0318 12:20:59.450182    2644 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0318 12:20:59.450677    2644 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 12:20:59.451863    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0318 12:20:59.452092    2644 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0318 12:20:59.452166    2644 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0318 12:20:59.452570    2644 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 12:20:59.453531    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0318 12:20:59.453782    2644 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0318 12:20:59.453871    2644 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0318 12:20:59.454186    2644 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 12:20:59.455234    2644 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-642600-m02 san=[127.0.0.1 172.25.159.102 localhost minikube multinode-642600-m02]
	I0318 12:20:59.877901    2644 provision.go:177] copyRemoteCerts
	I0318 12:20:59.890584    2644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:20:59.890584    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:21:02.152038    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:21:02.152038    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:02.152038    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:21:04.872834    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:21:04.872834    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:04.873679    2644 sshutil.go:53] new ssh client: &{IP:172.25.159.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\id_rsa Username:docker}
	I0318 12:21:04.981669    2644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0910534s)
	I0318 12:21:04.981794    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 12:21:04.982355    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 12:21:05.032716    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 12:21:05.033344    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:21:05.087843    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 12:21:05.087843    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0318 12:21:05.146951    2644 provision.go:87] duration metric: took 15.6188587s to configureAuth
	I0318 12:21:05.146951    2644 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:21:05.146951    2644 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:21:05.146951    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:21:07.430242    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:21:07.430242    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:07.431114    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:21:10.186467    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:21:10.186818    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:10.195669    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:21:10.195669    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.159.102 22 <nil> <nil>}
	I0318 12:21:10.195669    2644 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 12:21:10.326472    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 12:21:10.326472    2644 buildroot.go:70] root file system type: tmpfs
	I0318 12:21:10.326813    2644 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 12:21:10.326813    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:21:12.556560    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:21:12.556560    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:12.556728    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:21:15.254501    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:21:15.254501    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:15.260516    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:21:15.260610    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.159.102 22 <nil> <nil>}
	I0318 12:21:15.260610    2644 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.151.112"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 12:21:15.412250    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.151.112
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 12:21:15.412414    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:21:17.659436    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:21:17.659436    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:17.659436    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:21:20.439425    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:21:20.439425    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:20.447826    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:21:20.448346    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.159.102 22 <nil> <nil>}
	I0318 12:21:20.448346    2644 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 12:21:22.627679    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 12:21:22.627755    2644 machine.go:97] duration metric: took 48.5414847s to provisionDockerMachine
	I0318 12:21:22.627755    2644 client.go:171] duration metric: took 2m1.7817216s to LocalClient.Create
	I0318 12:21:22.627842    2644 start.go:167] duration metric: took 2m1.7819167s to libmachine.API.Create "multinode-642600"
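Note: the docker.service one-liner a few lines up is an update-only-if-changed guard: `diff -u` exits non-zero when the rendered unit differs from /lib/systemd/system/docker.service (or, as here, when no unit exists yet), and only then is the new file moved into place and the service reloaded, enabled, and restarted; the "Created symlink" line is the enable step. The same guard expressed locally, with stand-in paths:

    # Install the new unit only when it differs from the current one.
    $cur = 'docker.service'         # stand-ins for the real unit paths
    $new = 'docker.service.new'
    $changed = (-not (Test-Path $cur)) -or
               ((Get-FileHash $cur).Hash -ne (Get-FileHash $new).Hash)
    if ($changed) { Move-Item $new $cur -Force }   # then: daemon-reload, enable, restart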
	I0318 12:21:22.627896    2644 start.go:293] postStartSetup for "multinode-642600-m02" (driver="hyperv")
	I0318 12:21:22.627896    2644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:21:22.640700    2644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:21:22.640700    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:21:24.950517    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:21:24.950517    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:24.950878    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:21:27.749274    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:21:27.749274    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:27.749627    2644 sshutil.go:53] new ssh client: &{IP:172.25.159.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\id_rsa Username:docker}
	I0318 12:21:27.856035    2644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2153036s)
	I0318 12:21:27.868830    2644 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:21:27.875708    2644 command_runner.go:130] > NAME=Buildroot
	I0318 12:21:27.875943    2644 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 12:21:27.875943    2644 command_runner.go:130] > ID=buildroot
	I0318 12:21:27.875943    2644 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 12:21:27.875943    2644 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 12:21:27.876130    2644 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:21:27.876130    2644 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0318 12:21:27.876130    2644 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0318 12:21:27.877335    2644 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> 91202.pem in /etc/ssl/certs
	I0318 12:21:27.877335    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /etc/ssl/certs/91202.pem
	I0318 12:21:27.890210    2644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:21:27.908594    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /etc/ssl/certs/91202.pem (1708 bytes)
	I0318 12:21:27.959005    2644 start.go:296] duration metric: took 5.3310765s for postStartSetup
	I0318 12:21:27.962050    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:21:30.212430    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:21:30.212514    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:30.212576    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:21:32.878832    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:21:32.879647    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:32.879962    2644 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:21:32.882610    2644 start.go:128] duration metric: took 2m12.0446106s to createHost
	I0318 12:21:32.882610    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:21:35.133622    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:21:35.134178    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:35.134246    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:21:37.853781    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:21:37.853781    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:37.860042    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:21:37.860205    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.159.102 22 <nil> <nil>}
	I0318 12:21:37.860205    2644 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 12:21:37.985520    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710764497.983387488
	
	I0318 12:21:37.985629    2644 fix.go:216] guest clock: 1710764497.983387488
	I0318 12:21:37.985629    2644 fix.go:229] Guest: 2024-03-18 12:21:37.983387488 +0000 UTC Remote: 2024-03-18 12:21:32.8826108 +0000 UTC m=+359.664911501 (delta=5.100776688s)
	I0318 12:21:37.985712    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:21:40.243380    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:21:40.243380    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:40.243380    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:21:42.920436    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:21:42.921140    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:42.926800    2644 main.go:141] libmachine: Using SSH client type: native
	I0318 12:21:42.927763    2644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.159.102 22 <nil> <nil>}
	I0318 12:21:42.927839    2644 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710764497
	I0318 12:21:43.065899    2644 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 12:21:37 UTC 2024
	
	I0318 12:21:43.066009    2644 fix.go:236] clock set: Mon Mar 18 12:21:37 UTC 2024
	 (err=<nil>)
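Note: the exchange above is fix.go's guest-clock correction: it reads the guest's epoch time over SSH, compares it with the host-side timestamp captured when createHost finished (delta 5.1s here, mostly time spent provisioning), and pins the guest clock with `sudo date -s @<epoch>`. A rough host-side equivalent, with the 1s threshold as an assumption:

    $ip  = '172.25.159.102'
    $key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\id_rsa'
    $guest = [double](& ssh.exe -i $key -o StrictHostKeyChecking=no "docker@$ip" 'date +%s.%N')
    $now   = [DateTimeOffset]::UtcNow.ToUnixTimeMilliseconds() / 1000.0
    'guest-host delta: {0:n3}s' -f ($guest - $now)
    if ([math]::Abs($guest - $now) -gt 1) {        # threshold assumed
        & ssh.exe -i $key "docker@$ip" ("sudo date -s @{0}" -f [int64]$now)
    }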
	I0318 12:21:43.066009    2644 start.go:83] releasing machines lock for "multinode-642600-m02", held for 2m22.2280313s
	I0318 12:21:43.066009    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:21:45.371063    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:21:45.371262    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:45.371313    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:21:48.082599    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:21:48.082599    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:48.085846    2644 out.go:177] * Found network options:
	I0318 12:21:48.088634    2644 out.go:177]   - NO_PROXY=172.25.151.112
	W0318 12:21:48.090941    2644 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 12:21:48.098050    2644 out.go:177]   - NO_PROXY=172.25.151.112
	W0318 12:21:48.100307    2644 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 12:21:48.101301    2644 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 12:21:48.103320    2644 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:21:48.104321    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:21:48.114406    2644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 12:21:48.114406    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:21:50.450140    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:21:50.450261    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:50.450261    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:21:50.462693    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:21:50.462693    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:50.462693    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:21:53.291280    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:21:53.292310    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:53.293056    2644 sshutil.go:53] new ssh client: &{IP:172.25.159.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\id_rsa Username:docker}
	I0318 12:21:53.319329    2644 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:21:53.319329    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:21:53.319329    2644 sshutil.go:53] new ssh client: &{IP:172.25.159.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\id_rsa Username:docker}
	I0318 12:21:53.548909    2644 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 12:21:53.548996    2644 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0318 12:21:53.549082    2644 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.434557s)
	I0318 12:21:53.549152    2644 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4457598s)
	W0318 12:21:53.549152    2644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:21:53.562925    2644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:21:53.596407    2644 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0318 12:21:53.596407    2644 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 12:21:53.596407    2644 start.go:494] detecting cgroup driver to use...
	I0318 12:21:53.596407    2644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:21:53.633224    2644 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0318 12:21:53.646393    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 12:21:53.680151    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 12:21:53.702461    2644 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 12:21:53.714382    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 12:21:53.749421    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 12:21:53.782849    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 12:21:53.814450    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 12:21:53.846921    2644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:21:53.879622    2644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 12:21:53.914090    2644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:21:53.935469    2644 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 12:21:53.948708    2644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:21:53.980263    2644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:21:54.189134    2644 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 12:21:54.223477    2644 start.go:494] detecting cgroup driver to use...
	I0318 12:21:54.236984    2644 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 12:21:54.264809    2644 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0318 12:21:54.264910    2644 command_runner.go:130] > [Unit]
	I0318 12:21:54.264910    2644 command_runner.go:130] > Description=Docker Application Container Engine
	I0318 12:21:54.264910    2644 command_runner.go:130] > Documentation=https://docs.docker.com
	I0318 12:21:54.264968    2644 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0318 12:21:54.264968    2644 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0318 12:21:54.264968    2644 command_runner.go:130] > StartLimitBurst=3
	I0318 12:21:54.264968    2644 command_runner.go:130] > StartLimitIntervalSec=60
	I0318 12:21:54.264968    2644 command_runner.go:130] > [Service]
	I0318 12:21:54.265026    2644 command_runner.go:130] > Type=notify
	I0318 12:21:54.265026    2644 command_runner.go:130] > Restart=on-failure
	I0318 12:21:54.265026    2644 command_runner.go:130] > Environment=NO_PROXY=172.25.151.112
	I0318 12:21:54.265077    2644 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0318 12:21:54.265077    2644 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0318 12:21:54.265077    2644 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0318 12:21:54.265077    2644 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0318 12:21:54.265077    2644 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0318 12:21:54.265077    2644 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0318 12:21:54.265077    2644 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0318 12:21:54.265077    2644 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0318 12:21:54.265077    2644 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0318 12:21:54.265077    2644 command_runner.go:130] > ExecStart=
	I0318 12:21:54.265077    2644 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0318 12:21:54.265077    2644 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0318 12:21:54.265077    2644 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0318 12:21:54.265077    2644 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0318 12:21:54.265077    2644 command_runner.go:130] > LimitNOFILE=infinity
	I0318 12:21:54.265077    2644 command_runner.go:130] > LimitNPROC=infinity
	I0318 12:21:54.265077    2644 command_runner.go:130] > LimitCORE=infinity
	I0318 12:21:54.265077    2644 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0318 12:21:54.265077    2644 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0318 12:21:54.265077    2644 command_runner.go:130] > TasksMax=infinity
	I0318 12:21:54.265077    2644 command_runner.go:130] > TimeoutStartSec=0
	I0318 12:21:54.265077    2644 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0318 12:21:54.265077    2644 command_runner.go:130] > Delegate=yes
	I0318 12:21:54.265077    2644 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0318 12:21:54.265077    2644 command_runner.go:130] > KillMode=process
	I0318 12:21:54.265077    2644 command_runner.go:130] > [Install]
	I0318 12:21:54.265077    2644 command_runner.go:130] > WantedBy=multi-user.target
	I0318 12:21:54.278234    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:21:54.318854    2644 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:21:54.365512    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:21:54.407844    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 12:21:54.446177    2644 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 12:21:54.512679    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 12:21:54.543571    2644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:21:54.580592    2644 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
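
The tee step above seeds /etc/crictl.yaml so that crictl (and any CRI client reading that config) resolves the runtime endpoint to cri-dockerd's socket instead of probing defaults. A minimal Go sketch of the same write, with local file I/O standing in for the ssh_runner pipeline (path and payload taken from the log; everything else is illustrative):

package main

import "os"

// Seed /etc/crictl.yaml so CRI tooling targets cri-dockerd, mirroring the
// "printf | sudo tee" pipeline in the log above.
func main() {
	conf := []byte("runtime-endpoint: unix:///var/run/cri-dockerd.sock\n")
	if err := os.WriteFile("/etc/crictl.yaml", conf, 0644); err != nil {
		panic(err) // needs root, like the sudo tee above
	}
}
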
	I0318 12:21:54.596236    2644 ssh_runner.go:195] Run: which cri-dockerd
	I0318 12:21:54.602713    2644 command_runner.go:130] > /usr/bin/cri-dockerd
	I0318 12:21:54.615508    2644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 12:21:54.635539    2644 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 12:21:54.682123    2644 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 12:21:54.911232    2644 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 12:21:55.129000    2644 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 12:21:55.129135    2644 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 12:21:55.185995    2644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:21:55.401482    2644 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 12:21:57.967228    2644 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5647269s)
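
docker.go:574 above rewrites /etc/docker/daemon.json (130 bytes) to pin the cgroup driver before the daemon-reload/restart pair. The log does not show the file's contents; a sketch under the assumption that it carries dockerd's standard exec-opts key (only the "cgroupfs" choice is confirmed by the log):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed shape of the daemon.json: the standard dockerd option for
	// pinning a cgroup driver. Other keys minikube may write are omitted.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // this is what would be scp'd to /etc/docker/daemon.json
}
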
	I0318 12:21:57.980065    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 12:21:58.018753    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 12:21:58.058704    2644 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 12:21:58.277683    2644 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 12:21:58.504558    2644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:21:58.727196    2644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 12:21:58.770611    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 12:21:58.812571    2644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:21:59.056436    2644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 12:21:59.169369    2644 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 12:21:59.183193    2644 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 12:21:59.196086    2644 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0318 12:21:59.196086    2644 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 12:21:59.196186    2644 command_runner.go:130] > Device: 0,22	Inode: 880         Links: 1
	I0318 12:21:59.196186    2644 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0318 12:21:59.196186    2644 command_runner.go:130] > Access: 2024-03-18 12:21:59.080778629 +0000
	I0318 12:21:59.196259    2644 command_runner.go:130] > Modify: 2024-03-18 12:21:59.080778629 +0000
	I0318 12:21:59.196259    2644 command_runner.go:130] > Change: 2024-03-18 12:21:59.084778641 +0000
	I0318 12:21:59.196259    2644 command_runner.go:130] >  Birth: -
	I0318 12:21:59.196341    2644 start.go:562] Will wait 60s for crictl version
	I0318 12:21:59.206561    2644 ssh_runner.go:195] Run: which crictl
	I0318 12:21:59.214146    2644 command_runner.go:130] > /usr/bin/crictl
	I0318 12:21:59.227985    2644 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:21:59.307289    2644 command_runner.go:130] > Version:  0.1.0
	I0318 12:21:59.307353    2644 command_runner.go:130] > RuntimeName:  docker
	I0318 12:21:59.307353    2644 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0318 12:21:59.307353    2644 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 12:21:59.307415    2644 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 12:21:59.318687    2644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 12:21:59.354686    2644 command_runner.go:130] > 25.0.4
	I0318 12:21:59.365475    2644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 12:21:59.397993    2644 command_runner.go:130] > 25.0.4
	I0318 12:21:59.402177    2644 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 12:21:59.404586    2644 out.go:177]   - env NO_PROXY=172.25.151.112
	I0318 12:21:59.406941    2644 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 12:21:59.411115    2644 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 12:21:59.411115    2644 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 12:21:59.411115    2644 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 12:21:59.411115    2644 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ae:0d:2c Flags:up|broadcast|multicast|running}
	I0318 12:21:59.414044    2644 ip.go:210] interface addr: fe80::f8a6:d6b6:cc4:1ba0/64
	I0318 12:21:59.414044    2644 ip.go:210] interface addr: 172.25.144.1/20
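
The ip.go lines above walk the host's adapters, keep the first one whose name carries the "vEthernet (Default Switch)" prefix, and read its IPv4 address for the host.minikube.internal mapping that follows. A self-contained sketch of that prefix match using the standard library (interface name from the log; this is not minikube's actual ip.go):

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	const prefix = "vEthernet (Default Switch)" // from the log
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1"
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				fmt.Printf("found %q: %s\n", ifc.Name, ipnet.IP) // 172.25.144.1 here
				return
			}
		}
	}
}
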
	I0318 12:21:59.424978    2644 ssh_runner.go:195] Run: grep 172.25.144.1	host.minikube.internal$ /etc/hosts
	I0318 12:21:59.431650    2644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
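
The bash one-liner above is the usual safe-rewrite idiom: filter out any stale host.minikube.internal line, append the fresh mapping, and replace /etc/hosts through a scratch file so the update lands in one copy. Roughly the same thing in Go (a hypothetical local helper; the real path goes through the ssh runner):

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts" // requires root, like the sudo cp above
	const entry = "172.25.144.1\thost.minikube.internal"
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop any stale mapping, as the grep -v does
		}
		keep = append(keep, line)
	}
	keep = append(keep, entry)
	tmp := hostsPath + ".tmp" // stand-in for the /tmp/h.$$ scratch file
	if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	if err := os.Rename(tmp, hostsPath); err != nil {
		panic(err)
	}
}
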
	I0318 12:21:59.453986    2644 mustload.go:65] Loading cluster: multinode-642600
	I0318 12:21:59.453986    2644 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:21:59.455133    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:22:01.653188    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:22:01.653893    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:22:01.653893    2644 host.go:66] Checking if "multinode-642600" exists ...
	I0318 12:22:01.654284    2644 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600 for IP: 172.25.159.102
	I0318 12:22:01.654284    2644 certs.go:194] generating shared ca certs ...
	I0318 12:22:01.654284    2644 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:22:01.655293    2644 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0318 12:22:01.655683    2644 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0318 12:22:01.655761    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:22:01.656178    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:22:01.656473    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:22:01.656586    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:22:01.657547    2644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem (1338 bytes)
	W0318 12:22:01.657970    2644 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120_empty.pem, impossibly tiny 0 bytes
	I0318 12:22:01.657970    2644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0318 12:22:01.657970    2644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 12:22:01.658812    2644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 12:22:01.659116    2644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 12:22:01.659764    2644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem (1708 bytes)
	I0318 12:22:01.660076    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem -> /usr/share/ca-certificates/9120.pem
	I0318 12:22:01.660101    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /usr/share/ca-certificates/91202.pem
	I0318 12:22:01.660101    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:22:01.660691    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:22:01.710930    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:22:01.758024    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:22:01.807564    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:22:01.860272    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem --> /usr/share/ca-certificates/9120.pem (1338 bytes)
	I0318 12:22:01.910952    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /usr/share/ca-certificates/91202.pem (1708 bytes)
	I0318 12:22:01.960928    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:22:02.025202    2644 ssh_runner.go:195] Run: openssl version
	I0318 12:22:02.034213    2644 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 12:22:02.048272    2644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9120.pem && ln -fs /usr/share/ca-certificates/9120.pem /etc/ssl/certs/9120.pem"
	I0318 12:22:02.081737    2644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9120.pem
	I0318 12:22:02.088901    2644 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 12:22:02.089667    2644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 12:22:02.102009    2644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9120.pem
	I0318 12:22:02.111304    2644 command_runner.go:130] > 51391683
	I0318 12:22:02.124833    2644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9120.pem /etc/ssl/certs/51391683.0"
	I0318 12:22:02.161609    2644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91202.pem && ln -fs /usr/share/ca-certificates/91202.pem /etc/ssl/certs/91202.pem"
	I0318 12:22:02.195873    2644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91202.pem
	I0318 12:22:02.204045    2644 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 12:22:02.204126    2644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 12:22:02.215796    2644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91202.pem
	I0318 12:22:02.228786    2644 command_runner.go:130] > 3ec20f2e
	I0318 12:22:02.243565    2644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91202.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 12:22:02.278338    2644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:22:02.308769    2644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:22:02.315805    2644 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:22:02.316147    2644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:22:02.327769    2644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:22:02.336671    2644 command_runner.go:130] > b5213941
	I0318 12:22:02.349704    2644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
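
Each cert above is published twice: the PEM under /usr/share/ca-certificates, and a /etc/ssl/certs/<subject-hash>.0 symlink, because OpenSSL locates trusted CAs by the hash that `openssl x509 -hash -noout` prints (51391683, 3ec20f2e, b5213941 in this run). A hedged sketch of that hash-and-link step, shelling out to openssl just as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA exposes a PEM under the <subject-hash>.0 name OpenSSL looks up.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
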
	I0318 12:22:02.384334    2644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:22:02.392135    2644 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:22:02.392285    2644 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:22:02.392598    2644 kubeadm.go:928] updating node {m02 172.25.159.102 8443 v1.28.4 docker false true} ...
	I0318 12:22:02.392847    2644 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-642600-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.159.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
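
kubeadm.go:940 above renders the kubelet drop-in with per-node flags: --hostname-override and --node-ip are what distinguish m02 from the primary, and the empty ExecStart= clears the inherited command exactly as the docker unit earlier in this log does. A sketch of that templating (field values copied from the unit above; the template itself is illustrative, not minikube's source):

package main

import (
	"os"
	"text/template"
)

// Illustrative template for the 10-kubeadm.conf drop-in.
const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, map[string]string{
		"Version": "v1.28.4",
		"Node":    "multinode-642600-m02",
		"IP":      "172.25.159.102",
	})
	if err != nil {
		panic(err)
	}
}
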
	I0318 12:22:02.407014    2644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:22:02.425242    2644 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0318 12:22:02.425798    2644 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 12:22:02.439857    2644 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 12:22:02.457807    2644 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 12:22:02.457807    2644 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0318 12:22:02.457807    2644 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
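
binary.go here skips the local cache and streams each binary straight from dl.k8s.io, verified against the detached .sha256 sidecar named in the checksum= fragment. A minimal, hedged sketch of such a download-and-verify step (URLs from the log; error handling condensed, and this is not minikube's actual downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const url = "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl" // from the log
	bin, err := fetch(url)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(url + ".sha256") // detached checksum sidecar
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	if want := strings.Fields(string(sum))[0]; hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch, refusing to install")
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
}
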
	I0318 12:22:02.457807    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 12:22:02.457807    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 12:22:02.475266    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:22:02.475503    2644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 12:22:02.478992    2644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 12:22:02.499561    2644 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 12:22:02.499684    2644 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 12:22:02.499561    2644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 12:22:02.499855    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 12:22:02.499855    2644 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 12:22:02.499950    2644 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 12:22:02.500023    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 12:22:02.522430    2644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 12:22:02.574214    2644 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 12:22:02.578997    2644 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 12:22:02.578997    2644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
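
The pattern just above repeats per binary: probe the remote path with stat -c "%s %y" (size plus mtime), and scp only when the probe exits non-zero, which is why all three binaries transfer on this first start of m02. Schematically, with function types as hypothetical stand-ins for minikube's runner plumbing:

package main

import (
	"errors"
	"fmt"
)

// ensureFile mirrors the existence check above: stat the remote path with
// "%s %y" and transfer only when the probe fails.
func ensureFile(run func(string) (string, error), push func(string) error, remote string) error {
	probe := fmt.Sprintf("stat -c %q %s", "%s %y", remote)
	if _, err := run(probe); err == nil {
		return nil // present; a stricter check would compare size and mtime
	}
	return push(remote)
}

func main() {
	run := func(cmd string) (string, error) { return "", errors.New("exit status 1") }
	push := func(p string) error { fmt.Println("scp -->", p); return nil }
	_ = ensureFile(run, push, "/var/lib/minikube/binaries/v1.28.4/kubelet")
}
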
	I0318 12:22:03.903638    2644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0318 12:22:03.923746    2644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 12:22:03.960586    2644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:22:04.012552    2644 ssh_runner.go:195] Run: grep 172.25.151.112	control-plane.minikube.internal$ /etc/hosts
	I0318 12:22:04.019408    2644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.151.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:22:04.058086    2644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:22:04.279072    2644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:22:04.309179    2644 host.go:66] Checking if "multinode-642600" exists ...
	I0318 12:22:04.318524    2644 start.go:316] joinCluster: &{Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.151.112 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.159.102 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:22:04.318524    2644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 12:22:04.318524    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:22:06.589692    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:22:06.589692    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:22:06.590491    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:22:09.257038    2644 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:22:09.257038    2644 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:22:09.258131    2644 sshutil.go:53] new ssh client: &{IP:172.25.151.112 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:22:09.466364    2644 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 021clp.0urdkcoy0oh51dgl --discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef 
	I0318 12:22:09.466364    2644 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1478086s)
	I0318 12:22:09.466364    2644 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.25.159.102 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 12:22:09.467036    2644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 021clp.0urdkcoy0oh51dgl --discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-642600-m02"
	I0318 12:22:09.756977    2644 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 12:22:13.110528    2644 command_runner.go:130] > [preflight] Running pre-flight checks
	I0318 12:22:13.110528    2644 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0318 12:22:13.110528    2644 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0318 12:22:13.110528    2644 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 12:22:13.110528    2644 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 12:22:13.110528    2644 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0318 12:22:13.110528    2644 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0318 12:22:13.110528    2644 command_runner.go:130] > This node has joined the cluster:
	I0318 12:22:13.110528    2644 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0318 12:22:13.110528    2644 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0318 12:22:13.110528    2644 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0318 12:22:13.110528    2644 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 021clp.0urdkcoy0oh51dgl --discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-642600-m02": (3.6434703s)
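
The join above is two-phase: the control plane mints a ready-made join command with `kubeadm token create --print-join-command --ttl=0`, and the worker runs that command verbatim plus its node-local flags. A sketch of assembling the worker-side command from the printed one (token and hash copied from the token create output earlier in this log):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The command the control plane printed (see the token create output above).
	printed := "kubeadm join control-plane.minikube.internal:8443 --token 021clp.0urdkcoy0oh51dgl --discovery-token-ca-cert-hash sha256:1315b336657f971045d436062c4002c5bfe51c3e72afc075449943f75abc0cef"
	// Worker-side flags appended before running it over ssh, as in the log.
	join := strings.TrimSpace(printed) +
		" --ignore-preflight-errors=all" +
		" --cri-socket unix:///var/run/cri-dockerd.sock" +
		" --node-name=multinode-642600-m02"
	fmt.Println(join)
}
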
	I0318 12:22:13.110528    2644 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 12:22:13.389679    2644 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0318 12:22:13.660954    2644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-642600-m02 minikube.k8s.io/updated_at=2024_03_18T12_22_13_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd minikube.k8s.io/name=multinode-642600 minikube.k8s.io/primary=false
	I0318 12:22:13.799952    2644 command_runner.go:130] > node/multinode-642600-m02 labeled
	I0318 12:22:13.800143    2644 start.go:318] duration metric: took 9.4815607s to joinCluster
	I0318 12:22:13.800284    2644 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.25.159.102 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 12:22:13.802972    2644 out.go:177] * Verifying Kubernetes components...
	I0318 12:22:13.800909    2644 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:22:13.819216    2644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:22:14.091257    2644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:22:14.126810    2644 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:22:14.127105    2644 kapi.go:59] client config for multinode-642600: &rest.Config{Host:"https://172.25.151.112:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-642600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-642600\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 12:22:14.127834    2644 node_ready.go:35] waiting up to 6m0s for node "multinode-642600-m02" to be "Ready" ...
	I0318 12:22:14.128413    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:14.128413    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:14.128413    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:14.128413    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:14.142698    2644 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0318 12:22:14.142698    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:14.142698    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:14.142698    2644 round_trippers.go:580]     Content-Length: 4044
	I0318 12:22:14.142698    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:14 GMT
	I0318 12:22:14.142698    2644 round_trippers.go:580]     Audit-Id: a52b5db4-adef-4118-be0e-1ed5edfecefb
	I0318 12:22:14.142698    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:14.142698    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:14.142698    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:14.143471    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"627","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3020 chars]
	I0318 12:22:14.639446    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:14.639712    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:14.639712    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:14.639712    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:14.643089    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:14.643089    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:14.643089    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:14.643089    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:14.643089    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:14.643089    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:14.643089    2644 round_trippers.go:580]     Content-Length: 4044
	I0318 12:22:14.643089    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:14 GMT
	I0318 12:22:14.643089    2644 round_trippers.go:580]     Audit-Id: 6876735d-0874-4c92-bc39-44bb281891d5
	I0318 12:22:14.643974    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"627","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3020 chars]
	I0318 12:22:15.143956    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:15.143956    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:15.144262    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:15.144262    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:15.148610    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:15.148653    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:15.148653    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:15.148653    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:15.148653    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:15.148653    2644 round_trippers.go:580]     Content-Length: 4044
	I0318 12:22:15.148737    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:15 GMT
	I0318 12:22:15.148737    2644 round_trippers.go:580]     Audit-Id: c320428c-efa3-47db-9a99-b6225b45e87b
	I0318 12:22:15.148737    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:15.148737    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"627","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3020 chars]
	I0318 12:22:15.642733    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:15.642733    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:15.642733    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:15.642733    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:15.646380    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:15.646380    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:15.646380    2644 round_trippers.go:580]     Content-Length: 4044
	I0318 12:22:15.646380    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:15 GMT
	I0318 12:22:15.646380    2644 round_trippers.go:580]     Audit-Id: cfb45c57-3655-4f96-a395-7788f114d0d0
	I0318 12:22:15.646380    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:15.646380    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:15.646380    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:15.647030    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:15.647263    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"627","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3020 chars]
	I0318 12:22:16.132497    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:16.132497    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:16.132497    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:16.132497    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:16.139135    2644 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:22:16.139135    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:16.140155    2644 round_trippers.go:580]     Audit-Id: 3d278a2d-740b-44ab-8597-260171007f86
	I0318 12:22:16.140155    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:16.140155    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:16.140155    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:16.140155    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:16.140155    2644 round_trippers.go:580]     Content-Length: 4044
	I0318 12:22:16.140207    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:16 GMT
	I0318 12:22:16.140484    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"627","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3020 chars]
	I0318 12:22:16.140803    2644 node_ready.go:53] node "multinode-642600-m02" has status "Ready":"False"
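
node_ready.go then polls GET /api/v1/nodes/<name> roughly every half second for up to 6m0s, reporting Ready=False (as just above) until the kubelet posts a Ready condition. A simplified sketch of such a loop with plain net/http; minikube actually uses client-go, and the TLS setup with the cluster CA and client cert, omitted here, is required for this URL to answer at all:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Just the fields the readiness check needs from the Node object.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func ready(c *http.Client, url string) (bool, error) {
	resp, err := c.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, cond := range n.Status.Conditions {
		if cond.Type == "Ready" {
			return cond.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	const url = "https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02"
	c := &http.Client{Timeout: 10 * time.Second} // real calls need the cluster CA + client cert
	deadline := time.Now().Add(6 * time.Minute)  // matches "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		if ok, err := ready(c, url); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}
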
	I0318 12:22:16.636660    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:16.636660    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:16.636660    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:16.636660    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:16.642290    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:22:16.642388    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:16.642388    2644 round_trippers.go:580]     Content-Length: 4044
	I0318 12:22:16.642388    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:16 GMT
	I0318 12:22:16.642388    2644 round_trippers.go:580]     Audit-Id: 58c9d6b1-7846-4133-adac-48ec6835599b
	I0318 12:22:16.642388    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:16.642388    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:16.642388    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:16.642388    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:16.642609    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"627","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3020 chars]
	I0318 12:22:17.141325    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:17.141325    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:17.141428    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:17.141428    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:17.144716    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:17.144716    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:17.144716    2644 round_trippers.go:580]     Content-Length: 4044
	I0318 12:22:17.145535    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:17 GMT
	I0318 12:22:17.145535    2644 round_trippers.go:580]     Audit-Id: 1a1263c4-6033-4f36-9a69-4ebef3ac504f
	I0318 12:22:17.145535    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:17.145535    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:17.145535    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:17.145535    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:17.145835    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"627","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3020 chars]
	I0318 12:22:17.630391    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:17.630391    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:17.630391    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:17.630391    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:17.635588    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:22:17.635588    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:17.635588    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:17 GMT
	I0318 12:22:17.635588    2644 round_trippers.go:580]     Audit-Id: 8a1bb062-42ec-45a0-8340-46c9e2341fe2
	I0318 12:22:17.635588    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:17.635588    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:17.635588    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:17.635588    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:17.635588    2644 round_trippers.go:580]     Content-Length: 4044
	I0318 12:22:17.636135    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"627","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3020 chars]
	I0318 12:22:18.136626    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:18.136626    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:18.136626    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:18.136626    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:18.141235    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:18.142121    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:18.142121    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:18.142121    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:18.142121    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:18.142121    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:18.142121    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:18 GMT
	I0318 12:22:18.142121    2644 round_trippers.go:580]     Audit-Id: 5de1bb69-138a-4497-a6bf-323dad5aa133
	I0318 12:22:18.142422    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"633","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3129 chars]
	I0318 12:22:18.142867    2644 node_ready.go:53] node "multinode-642600-m02" has status "Ready":"False"
	I0318 12:22:18.639567    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:18.639652    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:18.639652    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:18.639652    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:18.645645    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:22:18.645645    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:18.645645    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:18 GMT
	I0318 12:22:18.645645    2644 round_trippers.go:580]     Audit-Id: e34c433c-09bc-4875-ba9f-7ed1ae8d0d30
	I0318 12:22:18.645645    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:18.645645    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:18.645645    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:18.645645    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:18.646328    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"633","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3129 chars]
	I0318 12:22:19.143427    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:19.143630    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:19.143630    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:19.143630    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:19.148903    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:22:19.148903    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:19.148903    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:19.148903    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:19 GMT
	I0318 12:22:19.148903    2644 round_trippers.go:580]     Audit-Id: a4a177c6-b585-495a-89d6-93748056f42b
	I0318 12:22:19.148903    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:19.148903    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:19.148903    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:19.149589    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"633","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3129 chars]
	I0318 12:22:19.631441    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:19.631526    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:19.631526    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:19.631526    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:19.638887    2644 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:22:19.638887    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:19.638887    2644 round_trippers.go:580]     Audit-Id: 289ce6be-e68d-4233-acbf-0f46d90a4da9
	I0318 12:22:19.638887    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:19.638887    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:19.638887    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:19.639024    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:19.639024    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:19 GMT
	I0318 12:22:19.639089    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"633","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3129 chars]
	I0318 12:22:20.133373    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:20.133634    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:20.133634    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:20.133634    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:20.137978    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:20.138177    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:20.138177    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:20.138177    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:20.138177    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:20 GMT
	I0318 12:22:20.138177    2644 round_trippers.go:580]     Audit-Id: bff8f97b-4d29-4964-9161-9b73faee8142
	I0318 12:22:20.138177    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:20.138177    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:20.138413    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"633","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3129 chars]
	I0318 12:22:20.638884    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:20.638884    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:20.638884    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:20.638884    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:20.644304    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:22:20.644304    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:20.644304    2644 round_trippers.go:580]     Audit-Id: 6bfdce22-932a-44cc-9f83-b0fcdc03f775
	I0318 12:22:20.644304    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:20.644304    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:20.644304    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:20.644551    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:20.644551    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:20 GMT
	I0318 12:22:20.644742    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"633","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3129 chars]
	I0318 12:22:20.644742    2644 node_ready.go:53] node "multinode-642600-m02" has status "Ready":"False"
	I0318 12:22:21.143409    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:21.143409    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:21.143409    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:21.143409    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:21.148179    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:21.148413    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:21.148413    2644 round_trippers.go:580]     Audit-Id: c874e825-dba4-48eb-8099-f4efad8d8a86
	I0318 12:22:21.148413    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:21.148413    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:21.148413    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:21.148413    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:21.148413    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:21 GMT
	I0318 12:22:21.148802    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"633","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3129 chars]
	I0318 12:22:21.633526    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:21.633526    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:21.633526    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:21.633526    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:21.638161    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:21.638161    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:21.638161    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:21.638161    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:21.638161    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:21.638161    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:21.638161    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:21 GMT
	I0318 12:22:21.638161    2644 round_trippers.go:580]     Audit-Id: 33d15e2c-cb5a-4a29-a31c-2ee073f0ae45
	I0318 12:22:21.638161    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"633","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3129 chars]
	I0318 12:22:22.141785    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:22.141901    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:22.141901    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:22.141901    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:22.147787    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:22:22.148002    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:22.148002    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:22.148002    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:22.148002    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:22.148002    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:22.148002    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:22 GMT
	I0318 12:22:22.148002    2644 round_trippers.go:580]     Audit-Id: 9dab2f46-6c0b-48f5-9bb4-5599f03cf4b3
	I0318 12:22:22.148251    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"633","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3129 chars]
	I0318 12:22:22.634833    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:22.635028    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:22.635028    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:22.635028    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:22.639372    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:22.639372    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:22.639372    2644 round_trippers.go:580]     Audit-Id: 34cf41ba-1e42-4ad6-98ef-b179a3c19c65
	I0318 12:22:22.639513    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:22.639513    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:22.639513    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:22.639513    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:22.639513    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:22 GMT
	I0318 12:22:22.639677    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"633","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3129 chars]
	I0318 12:22:23.140601    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:23.140601    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:23.140601    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:23.140601    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:23.214492    2644 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
	I0318 12:22:23.214492    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:23.214492    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:23.214492    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:23.214492    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:23.215432    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:23.215432    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:23 GMT
	I0318 12:22:23.215432    2644 round_trippers.go:580]     Audit-Id: 66f5eb8f-bb79-4420-b8c3-51854173de56
	I0318 12:22:23.216478    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:23.217629    2644 node_ready.go:53] node "multinode-642600-m02" has status "Ready":"False"
	I0318 12:22:23.629193    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:23.629256    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:23.629256    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:23.629256    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:23.632769    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:23.633877    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:23.633877    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:23.633877    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:23.633877    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:23.633877    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:23 GMT
	I0318 12:22:23.633877    2644 round_trippers.go:580]     Audit-Id: c472daeb-fd73-4938-ab59-18b3106c070b
	I0318 12:22:23.633877    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:23.634318    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:24.136238    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:24.136312    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:24.136312    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:24.136392    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:24.235828    2644 round_trippers.go:574] Response Status: 200 OK in 99 milliseconds
	I0318 12:22:24.235828    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:24.235828    2644 round_trippers.go:580]     Audit-Id: 26fa2e7b-4929-4c51-8a68-0654d7ffcec5
	I0318 12:22:24.235828    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:24.235828    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:24.235828    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:24.235828    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:24.235828    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:24 GMT
	I0318 12:22:24.236177    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:24.640129    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:24.640183    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:24.640234    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:24.640234    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:24.679296    2644 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0318 12:22:24.679296    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:24.680011    2644 round_trippers.go:580]     Audit-Id: d90bc4b6-7b69-4966-a3b1-1212c333c1c4
	I0318 12:22:24.680011    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:24.680084    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:24.680084    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:24.680084    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:24.680124    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:24 GMT
	I0318 12:22:24.680411    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:25.129883    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:25.130154    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:25.130154    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:25.130154    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:25.133062    2644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:22:25.133062    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:25.133062    2644 round_trippers.go:580]     Audit-Id: 4f52c06f-c508-41d4-bae7-7775da79f124
	I0318 12:22:25.133062    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:25.133062    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:25.133062    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:25.133062    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:25.133062    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:25 GMT
	I0318 12:22:25.133062    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:25.638387    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:25.638454    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:25.638510    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:25.638510    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:25.642823    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:25.642823    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:25.642823    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:25.642823    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:25.643825    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:25 GMT
	I0318 12:22:25.643825    2644 round_trippers.go:580]     Audit-Id: 9cf955c8-0a65-405b-a5cd-c858aee3a9c2
	I0318 12:22:25.643825    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:25.643825    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:25.643825    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:25.645850    2644 node_ready.go:53] node "multinode-642600-m02" has status "Ready":"False"
	I0318 12:22:26.143154    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:26.143284    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:26.143284    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:26.143284    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:26.147115    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:26.147115    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:26.147115    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:26 GMT
	I0318 12:22:26.148071    2644 round_trippers.go:580]     Audit-Id: 2ef8a6bd-0315-4883-80c8-323cd136f72e
	I0318 12:22:26.148071    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:26.148071    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:26.148071    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:26.148071    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:26.148323    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:26.632710    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:26.633085    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:26.633085    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:26.633085    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:26.636704    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:26.637096    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:26.637096    2644 round_trippers.go:580]     Audit-Id: 47b2338f-7953-4992-87e0-7599b6405249
	I0318 12:22:26.637209    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:26.637250    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:26.637320    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:26.637363    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:26.637363    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:26 GMT
	I0318 12:22:26.637419    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:27.139824    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:27.139824    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:27.139824    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:27.139824    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:27.144581    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:27.144581    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:27.144581    2644 round_trippers.go:580]     Audit-Id: ced86824-0268-467a-af7b-5d3932aadf91
	I0318 12:22:27.144581    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:27.144581    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:27.144581    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:27.144581    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:27.144581    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:27 GMT
	I0318 12:22:27.144581    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:27.632105    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:27.632151    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:27.632185    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:27.632213    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:27.637192    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:27.637192    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:27.637192    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:27 GMT
	I0318 12:22:27.637260    2644 round_trippers.go:580]     Audit-Id: f45bf402-a725-4b88-934f-a34c629470a2
	I0318 12:22:27.637260    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:27.637260    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:27.637260    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:27.637260    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:27.637460    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:28.138345    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:28.138409    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:28.138409    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:28.138409    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:28.142892    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:28.142892    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:28.142892    2644 round_trippers.go:580]     Audit-Id: 5f92fbbd-8b21-4d09-b108-6f1012ba20d5
	I0318 12:22:28.142892    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:28.142892    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:28.142892    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:28.142892    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:28.142892    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:28 GMT
	I0318 12:22:28.143899    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:28.143899    2644 node_ready.go:53] node "multinode-642600-m02" has status "Ready":"False"
	I0318 12:22:28.629640    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:28.629640    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:28.629640    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:28.629640    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:28.637301    2644 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:22:28.637301    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:28.637702    2644 round_trippers.go:580]     Audit-Id: 6e108278-aa43-4774-b23d-d6528975c431
	I0318 12:22:28.637702    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:28.637702    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:28.637702    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:28.637702    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:28.637702    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:28 GMT
	I0318 12:22:28.637879    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:29.136992    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:29.137056    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:29.137056    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:29.137056    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:29.140806    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:29.141078    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:29.141078    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:29.141078    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:29.141078    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:29.141078    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:29 GMT
	I0318 12:22:29.141078    2644 round_trippers.go:580]     Audit-Id: 97139a95-81c8-45cd-8d2e-2023a1938031
	I0318 12:22:29.141078    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:29.141365    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:29.630445    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:29.630445    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:29.630445    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:29.630560    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:29.881268    2644 round_trippers.go:574] Response Status: 200 OK in 250 milliseconds
	I0318 12:22:29.881268    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:29.881268    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:29.881268    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:29.881268    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:29 GMT
	I0318 12:22:29.881268    2644 round_trippers.go:580]     Audit-Id: 516b1c5c-3fd8-46c7-8143-33d7dd861461
	I0318 12:22:29.881268    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:29.881268    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:29.881535    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:30.131331    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:30.131331    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:30.131331    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:30.131331    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:30.134977    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:30.134977    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:30.134977    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:30 GMT
	I0318 12:22:30.134977    2644 round_trippers.go:580]     Audit-Id: 74d0ffad-d121-43f6-9526-6a6d24099505
	I0318 12:22:30.134977    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:30.134977    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:30.134977    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:30.135087    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:30.135244    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:30.630231    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:30.630231    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:30.630231    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:30.630231    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:30.634824    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:30.634884    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:30.634884    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:30.634884    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:30 GMT
	I0318 12:22:30.634884    2644 round_trippers.go:580]     Audit-Id: 707b0e5b-3d08-48b3-bd99-34ecfc8feeff
	I0318 12:22:30.634884    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:30.634884    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:30.634884    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:30.634884    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:30.635672    2644 node_ready.go:53] node "multinode-642600-m02" has status "Ready":"False"
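The repeated cycle above is minikube's node-readiness wait: roughly every 500ms it GETs the node object from the API server and inspects its Ready condition (node_ready.go) until the status flips to "True". A minimal client-go sketch of that pattern follows — it is illustrative only, assuming a default kubeconfig location and hard-coding the node name from this run; it is not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption for the example: kubeconfig at the default ~/.kube/config.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	const nodeName = "multinode-642600-m02" // the node polled in the log above

	for {
		// One GET per cycle, as in the round_trippers entries above.
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("node %q has status \"Ready\":%q\n", nodeName, cond.Status)
				if cond.Status == corev1.ConditionTrue {
					return // node is Ready; stop polling
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between GETs
	}
}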
	I0318 12:22:31.134073    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:31.134073    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:31.134073    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:31.134073    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:31.137696    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:31.141334    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:31.141421    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:31.141421    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:31 GMT
	I0318 12:22:31.141421    2644 round_trippers.go:580]     Audit-Id: 2a073adc-a49d-4a51-9815-556ce0fdc04e
	I0318 12:22:31.141421    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:31.141421    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:31.141421    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:31.141718    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:31.639824    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:31.639824    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:31.639824    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:31.639908    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:31.643265    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:31.643265    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:31.643265    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:31.643265    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:31.643265    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:31 GMT
	I0318 12:22:31.643265    2644 round_trippers.go:580]     Audit-Id: c5a5115f-8c3a-400f-b705-15ad2efc995a
	I0318 12:22:31.643265    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:31.643265    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:31.644399    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:32.129730    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:32.129791    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:32.129791    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:32.129791    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:32.133411    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:32.133411    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:32.133411    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:32.133411    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:32.133411    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:32 GMT
	I0318 12:22:32.133411    2644 round_trippers.go:580]     Audit-Id: a15d9228-04ab-48e9-bdc4-da306ec4b30c
	I0318 12:22:32.133411    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:32.133411    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:32.134389    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:32.635751    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:32.635751    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:32.635868    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:32.636199    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:32.639916    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:32.639916    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:32.639916    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:32.639916    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:32.639916    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:32 GMT
	I0318 12:22:32.639916    2644 round_trippers.go:580]     Audit-Id: 071d3815-006b-41c7-86f0-761d080ee40b
	I0318 12:22:32.639916    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:32.639916    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:32.641038    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:32.641570    2644 node_ready.go:53] node "multinode-642600-m02" has status "Ready":"False"
	I0318 12:22:33.137125    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:33.137125    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:33.137125    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:33.137242    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:33.141575    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:33.141575    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:33.141575    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:33.141575    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:33 GMT
	I0318 12:22:33.141575    2644 round_trippers.go:580]     Audit-Id: 9bd0e3d6-61ba-4f5e-9527-f2273e8d1814
	I0318 12:22:33.141575    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:33.141575    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:33.141575    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:33.142438    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:33.641509    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:33.641509    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:33.641509    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:33.641509    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:33.645140    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:33.645140    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:33.645140    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:33.645140    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:33.645140    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:33 GMT
	I0318 12:22:33.645627    2644 round_trippers.go:580]     Audit-Id: db782527-2deb-448e-b327-973987c8709b
	I0318 12:22:33.645627    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:33.645627    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:33.646050    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:34.131967    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:34.132217    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:34.132217    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:34.132217    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:34.137088    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:34.137088    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:34.137088    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:34.137088    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:34 GMT
	I0318 12:22:34.138107    2644 round_trippers.go:580]     Audit-Id: a0defa3b-cbee-4550-ab11-bdd474c6ace2
	I0318 12:22:34.138107    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:34.138107    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:34.138107    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:34.139131    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"640","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0318 12:22:34.636243    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:34.636328    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:34.636328    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:34.636328    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:34.641061    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:34.641061    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:34.641061    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:34.641061    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:34.641374    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:34 GMT
	I0318 12:22:34.641374    2644 round_trippers.go:580]     Audit-Id: a2be8c9f-3b7b-4ac3-bd11-9ca270f2a420
	I0318 12:22:34.641374    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:34.641374    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:34.641738    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"664","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3264 chars]
	I0318 12:22:34.642218    2644 node_ready.go:49] node "multinode-642600-m02" has status "Ready":"True"
	I0318 12:22:34.642218    2644 node_ready.go:38] duration metric: took 20.5142589s for node "multinode-642600-m02" to be "Ready" ...
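
The twenty-second wait that just closed is a plain polling loop: minikube GETs /api/v1/nodes/multinode-642600-m02 roughly every 500ms until the node's Ready condition reports True, which is why the same request/response block repeats above with only timestamps and Audit-Ids changing. Below is a minimal client-go sketch of that pattern — illustrative only, not minikube's actual node_ready.go; the package name, waitNodeReady, and the 500ms interval are assumptions for the example.

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node object until its Ready condition is True,
    // mirroring the GET-every-500ms cadence visible in the log above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient errors as "not ready yet"
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }
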
	I0318 12:22:34.642218    2644 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:22:34.642485    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods
	I0318 12:22:34.642546    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:34.642546    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:34.642546    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:34.647672    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:22:34.647672    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:34.647672    2644 round_trippers.go:580]     Audit-Id: 1a250748-787e-41b6-af4e-bb432a228aef
	I0318 12:22:34.647672    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:34.647672    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:34.647672    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:34.647672    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:34.648096    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:34 GMT
	I0318 12:22:34.649818    2644 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"664"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"457","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67522 chars]
	I0318 12:22:34.653963    2644 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace to be "Ready" ...
	I0318 12:22:34.654177    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:22:34.654177    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:34.654177    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:34.654250    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:34.657464    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:34.658150    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:34.658150    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:34.658150    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:34 GMT
	I0318 12:22:34.658150    2644 round_trippers.go:580]     Audit-Id: 5f1a4f10-4aeb-4ab5-82b3-bfef6ecb41f0
	I0318 12:22:34.658150    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:34.658150    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:34.658150    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:34.658513    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"457","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0318 12:22:34.658832    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:22:34.658832    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:34.658832    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:34.658832    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:34.661835    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:34.661835    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:34.661835    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:34.661835    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:34 GMT
	I0318 12:22:34.661835    2644 round_trippers.go:580]     Audit-Id: d78bdda4-d1d6-4a59-a98e-39df94c8ad5a
	I0318 12:22:34.661835    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:34.661835    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:34.661835    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:34.662797    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"464","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0318 12:22:34.662847    2644 pod_ready.go:92] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"True"
	I0318 12:22:34.662847    2644 pod_ready.go:81] duration metric: took 8.8838ms for pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace to be "Ready" ...
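
Each pod_ready.go wait in this section is the same idea applied to pods: fetch the pod, scan status.conditions for PodReady, and stop once it is True (coredns here was already Ready, so the wait closed in under 9ms). A hedged sketch of that condition check follows; podIsReady is an illustrative helper name, not minikube's API.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podIsReady reports whether the pod's PodReady condition is True —
    // the check behind each `has status "Ready":"True"` line above.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        fmt.Println(podIsReady(&corev1.Pod{})) // an empty pod has no conditions: false
    }
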
	I0318 12:22:34.662847    2644 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:22:34.662847    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-642600
	I0318 12:22:34.662847    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:34.662847    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:34.662847    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:34.665701    2644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:22:34.665701    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:34.665701    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:34.665701    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:34.665701    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:34.665701    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:34.666358    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:34 GMT
	I0318 12:22:34.666358    2644 round_trippers.go:580]     Audit-Id: 5fe1b717-4875-4ce6-8ecb-a205b1574ac3
	I0318 12:22:34.666477    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-642600","namespace":"kube-system","uid":"237133d7-6f1a-42ee-8cf2-a2d7564d67fc","resourceVersion":"418","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.151.112:2379","kubernetes.io/config.hash":"ec96a596e22f5afedbd92a854d1b8bec","kubernetes.io/config.mirror":"ec96a596e22f5afedbd92a854d1b8bec","kubernetes.io/config.seen":"2024-03-18T12:18:50.896439006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0318 12:22:34.666893    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:22:34.666893    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:34.666893    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:34.666893    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:34.670546    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:34.670546    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:34.670546    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:34.670546    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:34.670546    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:34 GMT
	I0318 12:22:34.670613    2644 round_trippers.go:580]     Audit-Id: f55e641b-c75e-45e1-9654-9d61e377da23
	I0318 12:22:34.670613    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:34.670613    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:34.670660    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"464","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0318 12:22:34.671250    2644 pod_ready.go:92] pod "etcd-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:22:34.671250    2644 pod_ready.go:81] duration metric: took 8.4024ms for pod "etcd-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:22:34.671250    2644 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:22:34.671374    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-642600
	I0318 12:22:34.671430    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:34.671507    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:34.671507    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:34.676753    2644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:22:34.676753    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:34.676753    2644 round_trippers.go:580]     Audit-Id: 8a7b122c-b1ac-4617-b630-4c942d86faff
	I0318 12:22:34.676753    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:34.676753    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:34.676753    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:34.676753    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:34.676753    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:34 GMT
	I0318 12:22:34.676753    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-642600","namespace":"kube-system","uid":"4aa98cb9-f6ab-40b3-8c15-235ba4e09985","resourceVersion":"417","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.151.112:8443","kubernetes.io/config.hash":"d04d3e415061983b742e6c14f1a5f562","kubernetes.io/config.mirror":"d04d3e415061983b742e6c14f1a5f562","kubernetes.io/config.seen":"2024-03-18T12:18:50.896431006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0318 12:22:34.677957    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:22:34.677957    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:34.677957    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:34.677957    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:34.682930    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:34.682930    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:34.683018    2644 round_trippers.go:580]     Audit-Id: e0b3de7a-55f4-4aa8-8959-a455684df16e
	I0318 12:22:34.683018    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:34.683018    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:34.683018    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:34.683018    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:34.683018    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:34 GMT
	I0318 12:22:34.683237    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"464","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0318 12:22:34.683643    2644 pod_ready.go:92] pod "kube-apiserver-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:22:34.683681    2644 pod_ready.go:81] duration metric: took 12.4312ms for pod "kube-apiserver-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:22:34.683681    2644 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:22:34.683681    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-642600
	I0318 12:22:34.683681    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:34.683681    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:34.683681    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:34.686341    2644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:22:34.686341    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:34.687285    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:34.687285    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:34.687285    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:34.687285    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:34.687285    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:34 GMT
	I0318 12:22:34.687285    2644 round_trippers.go:580]     Audit-Id: c185cede-33f2-412a-a58a-e55184ee1422
	I0318 12:22:34.687285    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-642600","namespace":"kube-system","uid":"1dd2a576-c5a0-44e5-b194-545e8b18962c","resourceVersion":"415","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a1608bc774d0b3e96e1b6fbbded5cb52","kubernetes.io/config.mirror":"a1608bc774d0b3e96e1b6fbbded5cb52","kubernetes.io/config.seen":"2024-03-18T12:18:50.896437006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0318 12:22:34.688269    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:22:34.688269    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:34.688269    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:34.688269    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:34.692266    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:34.693270    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:34.693270    2644 round_trippers.go:580]     Audit-Id: 1a075dd1-e80c-47a4-b67b-31eba2913d79
	I0318 12:22:34.693270    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:34.693270    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:34.693270    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:34.693270    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:34.693270    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:34 GMT
	I0318 12:22:34.693270    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"464","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0318 12:22:34.693270    2644 pod_ready.go:92] pod "kube-controller-manager-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:22:34.693270    2644 pod_ready.go:81] duration metric: took 9.5893ms for pod "kube-controller-manager-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:22:34.693270    2644 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4dg79" in "kube-system" namespace to be "Ready" ...
	I0318 12:22:34.838699    2644 request.go:629] Waited for 145.2749ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4dg79
	I0318 12:22:34.838781    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4dg79
	I0318 12:22:34.838853    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:34.838853    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:34.838853    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:34.843670    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:34.843670    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:34.843670    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:34.843670    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:34.843670    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:34 GMT
	I0318 12:22:34.843670    2644 round_trippers.go:580]     Audit-Id: d33123c6-aec7-4c58-b00d-a02541e3c241
	I0318 12:22:34.843670    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:34.843670    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:34.843927    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4dg79","generateName":"kube-proxy-","namespace":"kube-system","uid":"449242c2-ad12-4da5-b339-3be7ab8a9b16","resourceVersion":"409","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0318 12:22:35.041148    2644 request.go:629] Waited for 196.4634ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:22:35.041595    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:22:35.041595    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:35.041595    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:35.041595    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:35.046247    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:35.046247    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:35.046247    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:35.046247    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:35.046247    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:35.046247    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:35 GMT
	I0318 12:22:35.046247    2644 round_trippers.go:580]     Audit-Id: 29029d60-4375-4f22-89c3-7bfb684f3c30
	I0318 12:22:35.046247    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:35.046247    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"464","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0318 12:22:35.047064    2644 pod_ready.go:92] pod "kube-proxy-4dg79" in "kube-system" namespace has status "Ready":"True"
	I0318 12:22:35.047064    2644 pod_ready.go:81] duration metric: took 353.7919ms for pod "kube-proxy-4dg79" in "kube-system" namespace to be "Ready" ...
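
The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own token-bucket rate limiter (default QPS 5, Burst 10), not from the API server's flow control; once the burst is spent, each request queues briefly, which produces the 150-400ms waits between back-to-back GETs here. Below is a sketch of how a client could raise those limits, assuming kubeconfig loading succeeds; the package name and newFastClient are hypothetical, and the 50/100 values are arbitrary examples.

    package clientcfg

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient raises client-go's default rate limits (QPS 5, Burst 10),
    // the source of the "client-side throttling" waits logged above.
    func newFastClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50   // requests per second sustained
        cfg.Burst = 100 // short-term burst allowance
        return kubernetes.NewForConfig(cfg)
    }
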
	I0318 12:22:35.047128    2644 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vts9f" in "kube-system" namespace to be "Ready" ...
	I0318 12:22:35.243055    2644 request.go:629] Waited for 195.6555ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vts9f
	I0318 12:22:35.243303    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vts9f
	I0318 12:22:35.243303    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:35.243303    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:35.243303    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:35.249754    2644 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:22:35.249754    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:35.249754    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:35.249754    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:35.249754    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:35.249754    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:35 GMT
	I0318 12:22:35.249754    2644 round_trippers.go:580]     Audit-Id: 56b97b09-bc2e-4b1f-91ff-864ac53e598b
	I0318 12:22:35.249754    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:35.250539    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vts9f","generateName":"kube-proxy-","namespace":"kube-system","uid":"9545be8f-07fd-49dd-99bd-e9e976e65e7b","resourceVersion":"648","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0318 12:22:35.445528    2644 request.go:629] Waited for 194.9027ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:35.445903    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:22:35.445903    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:35.445903    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:35.445903    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:35.453873    2644 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:22:35.453873    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:35.453873    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:35.453873    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:35.454291    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:35 GMT
	I0318 12:22:35.454291    2644 round_trippers.go:580]     Audit-Id: 871973c5-51b3-4cd9-a061-f518f69a40aa
	I0318 12:22:35.454291    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:35.454291    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:35.454537    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"664","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3264 chars]
	I0318 12:22:35.455007    2644 pod_ready.go:92] pod "kube-proxy-vts9f" in "kube-system" namespace has status "Ready":"True"
	I0318 12:22:35.455007    2644 pod_ready.go:81] duration metric: took 407.8765ms for pod "kube-proxy-vts9f" in "kube-system" namespace to be "Ready" ...
	I0318 12:22:35.455067    2644 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:22:35.646595    2644 request.go:629] Waited for 191.3246ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-642600
	I0318 12:22:35.646790    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-642600
	I0318 12:22:35.646790    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:35.646944    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:35.646944    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:35.651330    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:35.651398    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:35.651398    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:35 GMT
	I0318 12:22:35.651398    2644 round_trippers.go:580]     Audit-Id: 8f328ae3-5673-4362-aed4-e85e1af7e43f
	I0318 12:22:35.651398    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:35.651398    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:35.651398    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:35.651398    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:35.651720    2644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-642600","namespace":"kube-system","uid":"52e29d3b-d6e9-4109-916d-74123a2ab190","resourceVersion":"414","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cf50844b540be8ed0b3e767db413ac8f","kubernetes.io/config.mirror":"cf50844b540be8ed0b3e767db413ac8f","kubernetes.io/config.seen":"2024-03-18T12:18:50.896438106Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0318 12:22:35.850369    2644 request.go:629] Waited for 197.5999ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:22:35.850532    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes/multinode-642600
	I0318 12:22:35.850594    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:35.850594    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:35.850653    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:35.854006    2644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:22:35.854787    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:35.854906    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:35.855036    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:35 GMT
	I0318 12:22:35.855036    2644 round_trippers.go:580]     Audit-Id: c9a02e41-4f06-4bdb-855e-136a2a4ee299
	I0318 12:22:35.855036    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:35.855036    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:35.855036    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:35.855036    2644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"464","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0318 12:22:35.855724    2644 pod_ready.go:92] pod "kube-scheduler-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:22:35.855724    2644 pod_ready.go:81] duration metric: took 400.6546ms for pod "kube-scheduler-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:22:35.855724    2644 pod_ready.go:38] duration metric: took 1.2133641s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:22:35.856261    2644 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 12:22:35.873545    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:22:35.903272    2644 system_svc.go:56] duration metric: took 47.0109ms WaitForService to wait for kubelet
	I0318 12:22:35.903272    2644 kubeadm.go:576] duration metric: took 22.1028538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:22:35.903272    2644 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:22:36.051614    2644 request.go:629] Waited for 148.3404ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.151.112:8443/api/v1/nodes
	I0318 12:22:36.051881    2644 round_trippers.go:463] GET https://172.25.151.112:8443/api/v1/nodes
	I0318 12:22:36.051881    2644 round_trippers.go:469] Request Headers:
	I0318 12:22:36.051881    2644 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:22:36.051881    2644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:22:36.056548    2644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:22:36.056548    2644 round_trippers.go:577] Response Headers:
	I0318 12:22:36.056548    2644 round_trippers.go:580]     Content-Type: application/json
	I0318 12:22:36.056548    2644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:22:36.056548    2644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:22:36.056548    2644 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:22:36 GMT
	I0318 12:22:36.056548    2644 round_trippers.go:580]     Audit-Id: e7bbcb9a-4b85-4bd6-8464-3f05cbdfecf9
	I0318 12:22:36.056548    2644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:22:36.057260    2644 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"666"},"items":[{"metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"464","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9268 chars]
	I0318 12:22:36.057961    2644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:22:36.057995    2644 node_conditions.go:123] node cpu capacity is 2
	I0318 12:22:36.058047    2644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:22:36.058047    2644 node_conditions.go:123] node cpu capacity is 2
	I0318 12:22:36.058047    2644 node_conditions.go:105] duration metric: took 154.7733ms to run NodePressure ...
	I0318 12:22:36.058079    2644 start.go:240] waiting for startup goroutines ...
	I0318 12:22:36.058141    2644 start.go:254] writing updated cluster config ...
	I0318 12:22:36.072808    2644 ssh_runner.go:195] Run: rm -f paused
	I0318 12:22:36.236588    2644 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 12:22:36.240684    2644 out.go:177] * Done! kubectl is now configured to use "multinode-642600" cluster and "default" namespace by default
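	
	Note on the "Waited for … due to client-side throttling, not priority and fairness" lines above: the delay comes from client-go's default client-side rate limiter (QPS 5, burst 10), not from server-side API Priority and Fairness. A minimal Go sketch of how a client raises those limits; the kubeconfig path is a placeholder, not taken from this run:
	
	package main
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Placeholder path; minikube writes its kubeconfig elsewhere.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		// client-go defaults are QPS=5, Burst=10; requests over that budget are
		// delayed client-side, producing the "Waited for ..." log lines above.
		cfg.QPS = 50
		cfg.Burst = 100
		client := kubernetes.NewForConfigOrDie(cfg)
		_ = client
	}
	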
	
	
	==> Docker <==
	Mar 18 12:19:16 multinode-642600 dockerd[1334]: time="2024-03-18T12:19:16.526008627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:19:16 multinode-642600 cri-dockerd[1219]: time="2024-03-18T12:19:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a9b4c05a5ccd5364b8dac2797803c98520c4f98df0fba77af7521af64a15152/resolv.conf as [nameserver 172.25.144.1]"
	Mar 18 12:19:16 multinode-642600 dockerd[1334]: time="2024-03-18T12:19:16.841100766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 12:19:16 multinode-642600 dockerd[1334]: time="2024-03-18T12:19:16.842844874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 12:19:16 multinode-642600 dockerd[1334]: time="2024-03-18T12:19:16.842935975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:19:16 multinode-642600 dockerd[1334]: time="2024-03-18T12:19:16.843301777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:19:17 multinode-642600 dockerd[1334]: time="2024-03-18T12:19:17.921764807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 12:19:17 multinode-642600 dockerd[1334]: time="2024-03-18T12:19:17.922596611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 12:19:17 multinode-642600 dockerd[1334]: time="2024-03-18T12:19:17.922971712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:19:17 multinode-642600 dockerd[1334]: time="2024-03-18T12:19:17.923504815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:19:18 multinode-642600 cri-dockerd[1219]: time="2024-03-18T12:19:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ed38da653fbefea9aeb0ebdb91f985394a7a792571704a4875018f5a6bc9abda/resolv.conf as [nameserver 172.25.144.1]"
	Mar 18 12:19:18 multinode-642600 dockerd[1334]: time="2024-03-18T12:19:18.387414250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 12:19:18 multinode-642600 dockerd[1334]: time="2024-03-18T12:19:18.387708751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 12:19:18 multinode-642600 dockerd[1334]: time="2024-03-18T12:19:18.387777851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:19:18 multinode-642600 dockerd[1334]: time="2024-03-18T12:19:18.387897452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:23:03 multinode-642600 dockerd[1334]: time="2024-03-18T12:23:03.111519419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 12:23:03 multinode-642600 dockerd[1334]: time="2024-03-18T12:23:03.111774619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 12:23:03 multinode-642600 dockerd[1334]: time="2024-03-18T12:23:03.111808719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:23:03 multinode-642600 dockerd[1334]: time="2024-03-18T12:23:03.112065519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:23:03 multinode-642600 cri-dockerd[1219]: time="2024-03-18T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 18 12:23:04 multinode-642600 cri-dockerd[1219]: time="2024-03-18T12:23:04Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 18 12:23:04 multinode-642600 dockerd[1334]: time="2024-03-18T12:23:04.804698569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 12:23:04 multinode-642600 dockerd[1334]: time="2024-03-18T12:23:04.804923869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 12:23:04 multinode-642600 dockerd[1334]: time="2024-03-18T12:23:04.804948469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:23:04 multinode-642600 dockerd[1334]: time="2024-03-18T12:23:04.808659881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8dd2eacb7251       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   50 seconds ago      Running             busybox                   0                   29bb4d534c2e2       busybox-5b5d89c9d6-48qkw
	e81f1d2fdb360       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   0                   ed38da653fbef       coredns-5dd5756b68-fgn7v
	996fb0f2ade69       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   3a9b4c05a5ccd       storage-provisioner
	5cf42651cb21d       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   fef37141be6db       kindnet-kpt4f
	4bbad08fe59ac       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                0                   2f4709a3a45a4       kube-proxy-4dg79
	301c80f8b38cb       73deb9a3f7025                                                                                         5 minutes ago       Running             etcd                      0                   aad98ae0cd7c7       etcd-multinode-642600
	a54be44369019       d058aa5ab969c                                                                                         5 minutes ago       Running             kube-controller-manager   0                   d766c4514f0bf       kube-controller-manager-multinode-642600
	47777d4c0b90d       e3db313c6dbc0                                                                                         5 minutes ago       Running             kube-scheduler            0                   3500a9f1ca84e       kube-scheduler-multinode-642600
	4b94d396876e5       7fe0e6f37db33                                                                                         5 minutes ago       Running             kube-apiserver            0                   f100b1062a569       kube-apiserver-multinode-642600
	
	
	==> coredns [e81f1d2fdb36] <==
	[INFO] 10.244.0.3:39370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181001s
	[INFO] 10.244.1.2:40318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000302101s
	[INFO] 10.244.1.2:43523 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001489s
	[INFO] 10.244.1.2:47882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001346s
	[INFO] 10.244.1.2:38222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057401s
	[INFO] 10.244.1.2:49068 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001253s
	[INFO] 10.244.1.2:35375 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000582s
	[INFO] 10.244.1.2:40933 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179201s
	[INFO] 10.244.1.2:36014 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002051s
	[INFO] 10.244.0.3:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265401s
	[INFO] 10.244.0.3:52912 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148001s
	[INFO] 10.244.0.3:33147 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143701s
	[INFO] 10.244.0.3:49893 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000536s
	[INFO] 10.244.1.2:42681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001221s
	[INFO] 10.244.1.2:41416 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143s
	[INFO] 10.244.1.2:58254 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000241501s
	[INFO] 10.244.1.2:35844 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197201s
	[INFO] 10.244.0.3:33559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102201s
	[INFO] 10.244.0.3:53963 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158701s
	[INFO] 10.244.0.3:41406 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001297s
	[INFO] 10.244.0.3:34685 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000264001s
	[INFO] 10.244.1.2:43312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001178s
	[INFO] 10.244.1.2:55281 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235501s
	[INFO] 10.244.1.2:34710 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000874s
	[INFO] 10.244.1.2:57686 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000557s
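	
	The query pattern above — NXDOMAIN for kubernetes.default.default.svc.cluster.local followed by NOERROR for kubernetes.default.svc.cluster.local — is the normal effect of the pod resolv.conf written earlier (search default.svc.cluster.local svc.cluster.local cluster.local, ndots:5): a short name is retried with each search suffix until one resolves. A small Go sketch showing how a trailing dot marks a name fully qualified and skips that expansion:
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		// The trailing dot makes the name rooted, so the resolver does not
		// append the resolv.conf search suffixes before querying CoreDNS.
		addrs, err := net.DefaultResolver.LookupHost(ctx, "kubernetes.default.svc.cluster.local.")
		fmt.Println(addrs, err)
	}
	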
	
	
	==> describe nodes <==
	Name:               multinode-642600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-642600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=multinode-642600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T12_18_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:18:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-642600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:23:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:23:28 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:23:28 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:23:28 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:23:28 +0000   Mon, 18 Mar 2024 12:19:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.151.112
	  Hostname:    multinode-642600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4fed1c761f3489c82e506ca9c7ef454
	  System UUID:                8a1bcbab-f132-7f42-b33a-a7db97e0afe6
	  Boot ID:                    5c00a598-0db7-4e3f-9cb0-8300634030e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-48qkw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 coredns-5dd5756b68-fgn7v                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m51s
	  kube-system                 etcd-multinode-642600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m3s
	  kube-system                 kindnet-kpt4f                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m51s
	  kube-system                 kube-apiserver-multinode-642600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-multinode-642600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-proxy-4dg79                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-scheduler-multinode-642600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m49s                  kube-proxy       
	  Normal  Starting                 5m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x7 over 5m13s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m4s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m3s                   kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m3s                   kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m3s                   kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m52s                  node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	  Normal  NodeReady                4m39s                  kubelet          Node multinode-642600 status is now: NodeReady
	
	
	Name:               multinode-642600-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-642600-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=multinode-642600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_22_13_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:22:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-642600-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:23:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:23:14 +0000   Mon, 18 Mar 2024 12:22:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:23:14 +0000   Mon, 18 Mar 2024 12:22:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:23:14 +0000   Mon, 18 Mar 2024 12:22:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:23:14 +0000   Mon, 18 Mar 2024 12:22:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.159.102
	  Hostname:    multinode-642600-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 3840c114554e41ff9ded1410244d8aba
	  System UUID:                23dbf5b1-f940-4749-8caf-1ae12d869a30
	  Boot ID:                    9a3fcab5-beb6-4505-b112-82809850bba3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-hmhdf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kindnet-d5llj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      102s
	  kube-system                 kube-proxy-vts9f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  102s (x5 over 103s)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x5 over 103s)  kubelet          Node multinode-642600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x5 over 103s)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           97s                  node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	  Normal  NodeReady                80s                  kubelet          Node multinode-642600-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.301468] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar18 12:17] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.197151] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[Mar18 12:18] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.108829] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.559496] systemd-fstab-generator[978]: Ignoring "noauto" option for root device
	[  +0.201240] systemd-fstab-generator[990]: Ignoring "noauto" option for root device
	[  +0.252186] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +2.858565] systemd-fstab-generator[1172]: Ignoring "noauto" option for root device
	[  +0.226883] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.224804] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.296633] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[ +13.615429] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.114692] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.392890] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	[  +7.356982] systemd-fstab-generator[1782]: Ignoring "noauto" option for root device
	[  +0.123237] kauditd_printk_skb: 73 callbacks suppressed
	[ +10.390271] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.161576] kauditd_printk_skb: 62 callbacks suppressed
	[Mar18 12:19] systemd-fstab-generator[4316]: Ignoring "noauto" option for root device
	[  +0.245462] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.782154] kauditd_printk_skb: 51 callbacks suppressed
	[Mar18 12:23] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [301c80f8b38c] <==
	{"level":"info","ts":"2024-03-18T12:18:44.047316Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T12:18:44.047735Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T12:18:44.049968Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T12:18:44.053292Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.151.112:2379"}
	{"level":"info","ts":"2024-03-18T12:18:44.047935Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T12:18:44.063941Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T12:18:44.06169Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T12:18:44.064616Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T12:18:44.064921Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T12:19:32.13743Z","caller":"traceutil/trace.go:171","msg":"trace[787848307] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"146.327063ms","start":"2024-03-18T12:19:31.991063Z","end":"2024-03-18T12:19:32.13739Z","steps":["trace[787848307] 'process raft request'  (duration: 146.056463ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:19:33.521781Z","caller":"traceutil/trace.go:171","msg":"trace[1180351717] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"159.385884ms","start":"2024-03-18T12:19:33.362378Z","end":"2024-03-18T12:19:33.521763Z","steps":["trace[1180351717] 'process raft request'  (duration: 159.207584ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:20:29.634789Z","caller":"traceutil/trace.go:171","msg":"trace[1727940677] transaction","detail":"{read_only:false; response_revision:517; number_of_response:1; }","duration":"101.995915ms","start":"2024-03-18T12:20:29.532771Z","end":"2024-03-18T12:20:29.634767Z","steps":["trace[1727940677] 'process raft request'  (duration: 101.835914ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:22:23.041052Z","caller":"traceutil/trace.go:171","msg":"trace[1431275931] linearizableReadLoop","detail":"{readStateIndex:694; appliedIndex:693; }","duration":"125.082492ms","start":"2024-03-18T12:22:22.915948Z","end":"2024-03-18T12:22:23.041031Z","steps":["trace[1431275931] 'read index received'  (duration: 124.353791ms)","trace[1431275931] 'applied index is now lower than readState.Index'  (duration: 728.001µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T12:22:23.041683Z","caller":"traceutil/trace.go:171","msg":"trace[535763675] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"164.737921ms","start":"2024-03-18T12:22:22.876783Z","end":"2024-03-18T12:22:23.041521Z","steps":["trace[535763675] 'process raft request'  (duration: 164.04822ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:22:23.044117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.856292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T12:22:23.044254Z","caller":"traceutil/trace.go:171","msg":"trace[492125944] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:639; }","duration":"128.313994ms","start":"2024-03-18T12:22:22.915928Z","end":"2024-03-18T12:22:23.044242Z","steps":["trace[492125944] 'agreement among raft nodes before linearized reading'  (duration: 125.784892ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:22:23.211388Z","caller":"traceutil/trace.go:171","msg":"trace[1350898351] transaction","detail":"{read_only:false; response_revision:640; number_of_response:1; }","duration":"133.289397ms","start":"2024-03-18T12:22:23.078084Z","end":"2024-03-18T12:22:23.211373Z","steps":["trace[1350898351] 'process raft request'  (duration: 132.512096ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:22:29.436897Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.5674ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-03-18T12:22:29.437026Z","caller":"traceutil/trace.go:171","msg":"trace[1890243475] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:652; }","duration":"137.6921ms","start":"2024-03-18T12:22:29.299301Z","end":"2024-03-18T12:22:29.436993Z","steps":["trace[1890243475] 'range keys from in-memory index tree'  (duration: 137.405899ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:22:29.875888Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.407166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-18T12:22:29.876037Z","caller":"traceutil/trace.go:171","msg":"trace[1403270444] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:653; }","duration":"230.558867ms","start":"2024-03-18T12:22:29.645445Z","end":"2024-03-18T12:22:29.876004Z","steps":["trace[1403270444] 'count revisions from in-memory index tree'  (duration: 230.258066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:22:29.87632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.597597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-18T12:22:29.876418Z","caller":"traceutil/trace.go:171","msg":"trace[1075745227] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:653; }","duration":"271.706197ms","start":"2024-03-18T12:22:29.604701Z","end":"2024-03-18T12:22:29.876407Z","steps":["trace[1075745227] 'count revisions from in-memory index tree'  (duration: 271.516997ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:22:29.876912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.599277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-642600-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-03-18T12:22:29.877702Z","caller":"traceutil/trace.go:171","msg":"trace[1271515036] range","detail":"{range_begin:/registry/minions/multinode-642600-m02; range_end:; response_count:1; response_revision:653; }","duration":"246.388678ms","start":"2024-03-18T12:22:29.631302Z","end":"2024-03-18T12:22:29.877691Z","steps":["trace[1271515036] 'range keys from in-memory index tree'  (duration: 245.442977ms)"],"step_count":1}
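	
	The "apply request took too long" warnings above fire whenever a read exceeds etcd's fixed 100ms expected-duration; on this run they line up with node-join activity around 12:22 and typically indicate transient disk or CPU contention on the VM rather than a fault. A rough Go sketch of bounding the same range read with a client-side deadline; this assumes an unauthenticated, reachable endpoint, whereas the real cluster's etcd requires client TLS certs, so it is illustrative only:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		clientv3 "go.etcd.io/etcd/client/v3"
	)
	
	func main() {
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"https://172.25.151.112:2379"}, // from the serve log above
			DialTimeout: 5 * time.Second,
		})
		if err != nil {
			panic(err)
		}
		defer cli.Close()
	
		// Give the read the same 100ms budget etcd's warning threshold uses.
		ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
		defer cancel()
		start := time.Now()
		_, err = cli.Get(ctx, "/registry/health")
		fmt.Println("range took", time.Since(start), "err:", err)
	}
	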
	
	
	==> kernel <==
	 12:23:55 up 7 min,  0 users,  load average: 0.47, 0.49, 0.25
	Linux multinode-642600 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5cf42651cb21] <==
	I0318 12:22:52.760666       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:23:02.767466       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:23:02.767515       1 main.go:227] handling current node
	I0318 12:23:02.767669       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:23:02.767684       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:23:12.783717       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:23:12.783822       1 main.go:227] handling current node
	I0318 12:23:12.783837       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:23:12.783845       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:23:22.796316       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:23:22.796431       1 main.go:227] handling current node
	I0318 12:23:22.796449       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:23:22.796458       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:23:32.804605       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:23:32.805508       1 main.go:227] handling current node
	I0318 12:23:32.805613       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:23:32.805718       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:23:42.813729       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:23:42.813830       1 main.go:227] handling current node
	I0318 12:23:42.814193       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:23:42.814322       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:23:52.824217       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:23:52.824327       1 main.go:227] handling current node
	I0318 12:23:52.824343       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:23:52.824352       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
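	
	The kindnet loop above re-syncs every ~10s: for the current node it only logs, and for each remote node it programs a route sending that node's PodCIDR via its InternalIP. A sketch of the equivalent route programming, assuming the github.com/vishvananda/netlink package (Linux, root required); the values are taken from the log, the code itself is illustrative:
	
	package main
	
	import (
		"net"
	
		"github.com/vishvananda/netlink"
	)
	
	func main() {
		_, podCIDR, err := net.ParseCIDR("10.244.1.0/24") // multinode-642600-m02's PodCIDR
		if err != nil {
			panic(err)
		}
		route := &netlink.Route{
			Dst: podCIDR,
			Gw:  net.ParseIP("172.25.159.102"), // that node's InternalIP
		}
		// RouteReplace is idempotent, matching the periodic re-sync in the log.
		if err := netlink.RouteReplace(route); err != nil {
			panic(err)
		}
	}
	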
	
	
	==> kube-apiserver [4b94d396876e] <==
	I0318 12:18:46.611847       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 12:18:46.616652       1 controller.go:624] quota admission added evaluator for: namespaces
	I0318 12:18:46.619040       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 12:18:46.619097       1 aggregator.go:166] initial CRD sync complete...
	I0318 12:18:46.619105       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 12:18:46.619110       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 12:18:46.619116       1 cache.go:39] Caches are synced for autoregister controller
	I0318 12:18:46.661484       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0318 12:18:46.680470       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0318 12:18:46.894199       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 12:18:47.421338       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0318 12:18:47.434245       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0318 12:18:47.434264       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 12:18:48.796623       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 12:18:48.942232       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 12:18:49.100514       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0318 12:18:49.115899       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.151.112]
	I0318 12:18:49.117464       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 12:18:49.139006       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 12:18:49.593989       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 12:18:50.772339       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 12:18:50.810106       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0318 12:18:50.827721       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 12:19:03.062382       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0318 12:19:03.118031       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a54be4436901] <==
	I0318 12:19:04.890459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.083804ms"
	I0318 12:19:04.890764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.4µs"
	I0318 12:19:15.938090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="157.9µs"
	I0318 12:19:15.982953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.301µs"
	I0318 12:19:17.607464       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 12:19:19.208242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.7µs"
	I0318 12:19:19.274086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.124146ms"
	I0318 12:19:19.275145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="211.9µs"
	I0318 12:22:12.652722       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m02\" does not exist"
	I0318 12:22:12.679760       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m02" podCIDRs=["10.244.1.0/24"]
	I0318 12:22:12.706735       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d5llj"
	I0318 12:22:12.706774       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vts9f"
	I0318 12:22:17.642129       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m02"
	I0318 12:22:17.642212       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller"
	I0318 12:22:34.263318       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:23:01.851486       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 12:23:01.881281       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-hmhdf"
	I0318 12:23:01.924301       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-48qkw"
	I0318 12:23:01.946058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.676064ms"
	I0318 12:23:02.049702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="103.251772ms"
	I0318 12:23:02.049789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="35.4µs"
	I0318 12:23:04.783277       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.030749ms"
	I0318 12:23:04.783520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.9µs"
	I0318 12:23:05.441638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.350047ms"
	I0318 12:23:05.441876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="105µs"
	
	
	==> kube-proxy [4bbad08fe59a] <==
	I0318 12:19:04.970720       1 server_others.go:69] "Using iptables proxy"
	I0318 12:19:04.997380       1 node.go:141] Successfully retrieved node IP: 172.25.151.112
	I0318 12:19:05.099028       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:19:05.099065       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:19:05.102885       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:19:05.103013       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:19:05.103652       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:19:05.103704       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:19:05.105505       1 config.go:188] "Starting service config controller"
	I0318 12:19:05.106093       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:19:05.106131       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:19:05.106138       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:19:05.107424       1 config.go:315] "Starting node config controller"
	I0318 12:19:05.107456       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:19:05.206699       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:19:05.206811       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:19:05.207857       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [47777d4c0b90] <==
	W0318 12:18:47.563772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 12:18:47.563806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 12:18:47.597770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 12:18:47.597873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 12:18:47.684794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 12:18:47.685008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 12:18:47.685352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 12:18:47.685509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 12:18:47.840132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 12:18:47.840303       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 12:18:47.879838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 12:18:47.880363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 12:18:47.906171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 12:18:47.906493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 12:18:48.059997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 12:18:48.060049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 12:18:48.096160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 12:18:48.096304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 12:18:48.096504       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 12:18:48.096662       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 12:18:48.133175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 12:18:48.133469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 12:18:48.135066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 12:18:48.135196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:18:50.022459       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
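	
	The burst of "forbidden" list/watch failures above is the usual scheduler start-up race: its informers begin listing before the system:kube-scheduler RBAC bindings have propagated, the reflectors retry with backoff, and the final "Caches are synced" line shows recovery. A minimal client-go sketch of that wait-for-sync pattern; the kubeconfig path is a placeholder:
	
	package main
	
	import (
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		stopCh := make(chan struct{})
		defer close(stopCh)
	
		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		podInformer := factory.Core().V1().Pods().Informer()
		factory.Start(stopCh)
	
		// Blocks while the reflector retries LIST/WATCH; transient "forbidden"
		// errors like those above are logged and retried with backoff.
		if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced) {
			panic("cache never synced")
		}
	}
	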
	
	
	==> kubelet <==
	Mar 18 12:20:51 multinode-642600 kubelet[2825]: E0318 12:20:51.085720    2825 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:20:51 multinode-642600 kubelet[2825]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:20:51 multinode-642600 kubelet[2825]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:20:51 multinode-642600 kubelet[2825]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:20:51 multinode-642600 kubelet[2825]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:21:51 multinode-642600 kubelet[2825]: E0318 12:21:51.083757    2825 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:21:51 multinode-642600 kubelet[2825]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:21:51 multinode-642600 kubelet[2825]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:21:51 multinode-642600 kubelet[2825]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:21:51 multinode-642600 kubelet[2825]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:22:51 multinode-642600 kubelet[2825]: E0318 12:22:51.084971    2825 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:22:51 multinode-642600 kubelet[2825]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:22:51 multinode-642600 kubelet[2825]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:22:51 multinode-642600 kubelet[2825]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:22:51 multinode-642600 kubelet[2825]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:23:01 multinode-642600 kubelet[2825]: I0318 12:23:01.947800    2825 topology_manager.go:215] "Topology Admit Handler" podUID="45969c0e-ac43-459e-95c0-86f7b76947db" podNamespace="default" podName="busybox-5b5d89c9d6-48qkw"
	Mar 18 12:23:01 multinode-642600 kubelet[2825]: W0318 12:23:01.955394    2825 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-642600" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-642600' and this object
	Mar 18 12:23:01 multinode-642600 kubelet[2825]: E0318 12:23:01.955450    2825 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-642600" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-642600' and this object
	Mar 18 12:23:02 multinode-642600 kubelet[2825]: I0318 12:23:02.046839    2825 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g8n5\" (UniqueName: \"kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5\") pod \"busybox-5b5d89c9d6-48qkw\" (UID: \"45969c0e-ac43-459e-95c0-86f7b76947db\") " pod="default/busybox-5b5d89c9d6-48qkw"
	Mar 18 12:23:03 multinode-642600 kubelet[2825]: I0318 12:23:03.355991    2825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e"
	Mar 18 12:23:51 multinode-642600 kubelet[2825]: E0318 12:23:51.092608    2825 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:23:51 multinode-642600 kubelet[2825]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:23:51 multinode-642600 kubelet[2825]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:23:51 multinode-642600 kubelet[2825]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:23:51 multinode-642600 kubelet[2825]:  > table="nat" chain="KUBE-KUBELET-CANARY"
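The iptables-canary failure repeats every minute because the guest kernel exposes no ip6tables nat table (the ip6table_nat module is missing or not loaded); it is cosmetic for an IPv4-only cluster like this one. A minimal check from the host, assuming the multinode-642600 profile is still running:

	# see whether the IPv6 NAT module exists in the guest kernel at all
	minikube ssh -p multinode-642600 -- "lsmod | grep ip6table_nat || sudo modprobe ip6table_nat"
	# if modprobe succeeds, the nat table becomes listable and the canary errors stop
	minikube ssh -p multinode-642600 -- "sudo ip6tables -t nat -L -n"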
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 12:23:46.472112    8796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-642600 -n multinode-642600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-642600 -n multinode-642600: (12.7038716s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-642600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (58.75s)
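The "Unable to resolve the current Docker CLI context \"default\"" warning in the stderr block above recurs in every minikube invocation in this run: the Docker CLI config on the Windows host names a context whose metadata directory has been deleted. It does not affect the hyperv driver, but it pollutes stderr on each call. A plausible cleanup on the host, assuming a standard Docker CLI install:

	# list known contexts, then point the CLI back at the built-in default
	docker context ls
	docker context use default

If the warning persists, removing the "currentContext" key from %USERPROFILE%\.docker\config.json should clear it.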

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (412.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-642600
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-642600
E0318 12:39:49.637811    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-642600: (1m41.7407733s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-642600 --wait=true -v=8 --alsologtostderr
E0318 12:42:21.977326    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 12:43:45.205599    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 12:44:49.637002    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-642600 --wait=true -v=8 --alsologtostderr: exit status 1 (4m19.2747381s)

                                                
                                                
-- stdout --
	* [multinode-642600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18431
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-642600" primary control-plane node in "multinode-642600" cluster
	* Restarting existing hyperv VM for "multinode-642600" ...
	* Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-642600-m02" worker node in "multinode-642600" cluster
	* Restarting existing hyperv VM for "multinode-642600-m02" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 12:41:14.092269    5712 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0318 12:41:14.174495    5712 out.go:291] Setting OutFile to fd 1340 ...
	I0318 12:41:14.175337    5712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:41:14.175337    5712 out.go:304] Setting ErrFile to fd 1412...
	I0318 12:41:14.175337    5712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:41:14.198574    5712 out.go:298] Setting JSON to false
	I0318 12:41:14.201477    5712 start.go:129] hostinfo: {"hostname":"minikube6","uptime":140998,"bootTime":1710624675,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0318 12:41:14.201477    5712 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 12:41:14.339915    5712 out.go:177] * [multinode-642600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0318 12:41:14.345294    5712 notify.go:220] Checking for updates...
	I0318 12:41:14.393369    5712 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:41:14.597473    5712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:41:14.752675    5712 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0318 12:41:14.939023    5712 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 12:41:15.132692    5712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:41:15.191577    5712 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:41:15.192298    5712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:41:21.044712    5712 out.go:177] * Using the hyperv driver based on existing profile
	I0318 12:41:21.232738    5712 start.go:297] selected driver: hyperv
	I0318 12:41:21.232837    5712 start.go:901] validating driver "hyperv" against &{Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.151.112 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.159.102 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.157.200 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:41:21.233162    5712 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:41:21.289013    5712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:41:21.289420    5712 cni.go:84] Creating CNI manager for ""
	I0318 12:41:21.289420    5712 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 12:41:21.289851    5712 start.go:340] cluster config:
	{Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.151.112 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.159.102 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.157.200 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:41:21.290276    5712 iso.go:125] acquiring lock: {Name:mk859ea173f7c19f70b69d7017f4a5a661cd1500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:41:21.434548    5712 out.go:177] * Starting "multinode-642600" primary control-plane node in "multinode-642600" cluster
	I0318 12:41:21.453985    5712 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 12:41:21.455138    5712 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 12:41:21.455138    5712 cache.go:56] Caching tarball of preloaded images
	I0318 12:41:21.455845    5712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 12:41:21.455918    5712 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 12:41:21.456298    5712 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:41:21.460836    5712 start.go:360] acquireMachinesLock for multinode-642600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:41:21.461043    5712 start.go:364] duration metric: took 106.3µs to acquireMachinesLock for "multinode-642600"
	I0318 12:41:21.461240    5712 start.go:96] Skipping create...Using existing machine configuration
	I0318 12:41:21.461332    5712 fix.go:54] fixHost starting: 
	I0318 12:41:21.461683    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:24.316826    5712 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 12:41:24.317689    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:24.317689    5712 fix.go:112] recreateIfNeeded on multinode-642600: state=Stopped err=<nil>
	W0318 12:41:24.317689    5712 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 12:41:24.342056    5712 out.go:177] * Restarting existing hyperv VM for "multinode-642600" ...
	I0318 12:41:24.530542    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-642600
	I0318 12:41:27.730125    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:41:27.730125    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:27.730125    5712 main.go:141] libmachine: Waiting for host to start...
	I0318 12:41:27.730125    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:30.053094    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:41:30.053154    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:30.053154    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:41:32.613670    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:41:32.613670    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:33.623916    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:35.862239    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:41:35.862239    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:35.862239    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:41:38.475717    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:41:38.475717    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:39.479064    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:41.712232    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:41:41.712666    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:41.712666    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:41:44.354805    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:41:44.354805    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:45.359948    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:47.681725    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:41:47.681784    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:47.681784    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:41:50.321516    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:41:50.321516    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:51.327615    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:53.627594    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:41:53.628519    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:53.628694    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:41:56.272362    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:41:56.272362    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:56.276226    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:58.473377    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:41:58.473377    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:58.473817    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:01.099563    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:01.099563    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:01.100689    5712 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:42:01.104150    5712 machine.go:94] provisionDockerMachine start ...
	I0318 12:42:01.104150    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:03.302275    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:03.302275    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:03.302366    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:05.967935    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:05.967935    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:05.974688    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:05.975228    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:05.975319    5712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 12:42:06.112098    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 12:42:06.112176    5712 buildroot.go:166] provisioning hostname "multinode-642600"
	I0318 12:42:06.112306    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:08.281483    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:08.281701    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:08.281701    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:10.900781    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:10.900781    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:10.906449    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:10.906591    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:10.906591    5712 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-642600 && echo "multinode-642600" | sudo tee /etc/hostname
	I0318 12:42:11.066386    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-642600
	
	I0318 12:42:11.066386    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:13.270565    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:13.270565    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:13.270565    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:15.963428    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:15.963428    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:15.970100    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:15.970699    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:15.970699    5712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-642600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-642600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-642600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:42:16.124542    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:42:16.124542    5712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0318 12:42:16.124542    5712 buildroot.go:174] setting up certificates
	I0318 12:42:16.124542    5712 provision.go:84] configureAuth start
	I0318 12:42:16.124542    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:18.322060    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:18.322060    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:18.322462    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:21.007290    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:21.007881    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:21.007881    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:23.257503    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:23.257503    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:23.257670    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:25.902074    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:25.902281    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:25.902281    5712 provision.go:143] copyHostCerts
	I0318 12:42:25.902463    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0318 12:42:25.902463    5712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0318 12:42:25.902463    5712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0318 12:42:25.903350    5712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 12:42:25.904398    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0318 12:42:25.904819    5712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0318 12:42:25.904819    5712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0318 12:42:25.904819    5712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 12:42:25.906175    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0318 12:42:25.906433    5712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0318 12:42:25.906433    5712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0318 12:42:25.906433    5712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 12:42:25.907646    5712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-642600 san=[127.0.0.1 172.25.148.129 localhost minikube multinode-642600]
	I0318 12:42:26.286423    5712 provision.go:177] copyRemoteCerts
	I0318 12:42:26.300775    5712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:42:26.300775    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:28.522309    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:28.522713    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:28.522713    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:31.113104    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:31.113104    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:31.114054    5712 sshutil.go:53] new ssh client: &{IP:172.25.148.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:42:31.226822    5712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9260158s)
	I0318 12:42:31.226822    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 12:42:31.227483    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:42:31.278696    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 12:42:31.279683    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0318 12:42:31.325681    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 12:42:31.326161    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 12:42:31.372268    5712 provision.go:87] duration metric: took 15.2476311s to configureAuth
	I0318 12:42:31.372410    5712 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:42:31.373444    5712 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:42:31.373624    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:33.576497    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:33.576669    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:33.576669    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:36.198535    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:36.198535    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:36.205452    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:36.206073    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:36.206073    5712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 12:42:36.334957    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 12:42:36.335033    5712 buildroot.go:70] root file system type: tmpfs
	I0318 12:42:36.335100    5712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 12:42:36.335100    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:38.493124    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:38.493124    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:38.493124    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:41.153303    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:41.153303    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:41.162945    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:41.162945    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:41.163461    5712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 12:42:41.328264    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 12:42:41.328264    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:43.465974    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:43.465974    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:43.466290    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:46.120930    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:46.120930    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:46.128155    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:46.128155    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:46.128155    5712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 12:42:48.730446    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 12:42:48.730596    5712 machine.go:97] duration metric: took 47.6259969s to provisionDockerMachine
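provisionDockerMachine writes the generated unit to docker.service.new and only swaps it in when diff reports a difference; here the stat failed because no unit existed yet, so the file was installed and enabled via the symlink shown above. A minimal follow-up check that the rewrite took effect, run over SSH and assuming the profile name used in this run:

	# the printed unit should contain the empty ExecStart= line followed by the real dockerd command
	minikube ssh -p multinode-642600 -- "sudo systemctl cat docker.service"
	minikube ssh -p multinode-642600 -- "systemctl show -p ExecStart docker"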
	I0318 12:42:48.730596    5712 start.go:293] postStartSetup for "multinode-642600" (driver="hyperv")
	I0318 12:42:48.730596    5712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:42:48.743935    5712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:42:48.743935    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:50.958241    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:50.958241    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:50.958842    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:53.583747    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:53.583747    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:53.584310    5712 sshutil.go:53] new ssh client: &{IP:172.25.148.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:42:53.693091    5712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9490814s)
	I0318 12:42:53.705692    5712 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:42:53.716868    5712 command_runner.go:130] > NAME=Buildroot
	I0318 12:42:53.716868    5712 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 12:42:53.716868    5712 command_runner.go:130] > ID=buildroot
	I0318 12:42:53.716868    5712 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 12:42:53.716868    5712 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 12:42:53.716868    5712 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:42:53.716868    5712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0318 12:42:53.717834    5712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0318 12:42:53.718936    5712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> 91202.pem in /etc/ssl/certs
	I0318 12:42:53.718966    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /etc/ssl/certs/91202.pem
	I0318 12:42:53.731248    5712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:42:53.749395    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /etc/ssl/certs/91202.pem (1708 bytes)
	I0318 12:42:53.800325    5712 start.go:296] duration metric: took 5.069697s for postStartSetup
	I0318 12:42:53.800480    5712 fix.go:56] duration metric: took 1m32.3386007s for fixHost
	I0318 12:42:53.800549    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:55.980602    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:55.980808    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:55.980862    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:58.629512    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:58.629512    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:58.637365    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:58.638015    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:58.638048    5712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 12:42:58.768996    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710765778.766537739
	
	I0318 12:42:58.769057    5712 fix.go:216] guest clock: 1710765778.766537739
	I0318 12:42:58.769057    5712 fix.go:229] Guest: 2024-03-18 12:42:58.766537739 +0000 UTC Remote: 2024-03-18 12:42:53.8004808 +0000 UTC m=+99.805653901 (delta=4.966056939s)
	I0318 12:42:58.769191    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:43:00.953898    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:43:00.953898    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:00.953898    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:43:03.562528    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:43:03.562528    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:03.569787    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:03.570112    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:43:03.570112    5712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710765778
	I0318 12:43:03.709754    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 12:42:58 UTC 2024
	
	I0318 12:43:03.709754    5712 fix.go:236] clock set: Mon Mar 18 12:42:58 UTC 2024
	 (err=<nil>)
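The fix.go lines above measure guest-versus-host clock skew (delta of roughly 5s here, accumulated while the VM was stopped) and hard-set the guest clock over SSH. The same idiom by hand, as a sketch assuming a bash-capable shell on the host:

	# compare epoch seconds on host and guest, then push the host time into the guest
	host=$(date +%s)
	guest=$(minikube ssh -p multinode-642600 -- date +%s)
	echo "skew: $((host - guest))s"
	minikube ssh -p multinode-642600 -- sudo date -s @"$host"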
	I0318 12:43:03.709754    5712 start.go:83] releasing machines lock for "multinode-642600", held for 1m42.2480452s
	I0318 12:43:03.710972    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:43:05.896205    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:43:05.896505    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:05.896608    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:43:08.523466    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:43:08.523466    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:08.527778    5712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:43:08.527778    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:43:08.538570    5712 ssh_runner.go:195] Run: cat /version.json
	I0318 12:43:08.538570    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:43:10.809161    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:43:10.809161    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:10.809534    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:43:10.809534    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:43:10.809706    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:10.809816    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:43:13.549579    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:43:13.549579    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:13.549692    5712 sshutil.go:53] new ssh client: &{IP:172.25.148.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:43:13.576311    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:43:13.576311    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:13.576784    5712 sshutil.go:53] new ssh client: &{IP:172.25.148.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:43:13.767080    5712 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 12:43:13.767179    5712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2393684s)
	I0318 12:43:13.767179    5712 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0318 12:43:13.767179    5712 ssh_runner.go:235] Completed: cat /version.json: (5.2285766s)
	I0318 12:43:13.780439    5712 ssh_runner.go:195] Run: systemctl --version
	I0318 12:43:13.791580    5712 command_runner.go:130] > systemd 252 (252)
	I0318 12:43:13.791728    5712 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 12:43:13.803493    5712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 12:43:13.812744    5712 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 12:43:13.813325    5712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:43:13.826084    5712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:43:13.855578    5712 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0318 12:43:13.856454    5712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 12:43:13.856577    5712 start.go:494] detecting cgroup driver to use...
	I0318 12:43:13.857036    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:43:13.897705    5712 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0318 12:43:13.910741    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 12:43:13.946052    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 12:43:13.966966    5712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 12:43:13.979953    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 12:43:14.012337    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 12:43:14.044571    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 12:43:14.078052    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 12:43:14.113260    5712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:43:14.145704    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
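
The sed runs above rewrite /etc/containerd/config.toml so containerd uses the "cgroupfs" cgroup driver (SystemdCgroup = false), the runc v2 runtime, and the standard CNI conf_dir. A rough Go equivalent of the SystemdCgroup rewrite, as a sketch only (the log performs these edits over SSH with sed):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Match the SystemdCgroup line at any indentation and force it to false,
    	// preserving the original leading whitespace.
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0644); err != nil {
    		panic(err)
    	}
    }
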
	I0318 12:43:14.181986    5712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:43:14.200399    5712 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 12:43:14.212857    5712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:43:14.248658    5712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:14.454652    5712 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 12:43:14.487097    5712 start.go:494] detecting cgroup driver to use...
	I0318 12:43:14.499933    5712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 12:43:14.522002    5712 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0318 12:43:14.522061    5712 command_runner.go:130] > [Unit]
	I0318 12:43:14.522123    5712 command_runner.go:130] > Description=Docker Application Container Engine
	I0318 12:43:14.522123    5712 command_runner.go:130] > Documentation=https://docs.docker.com
	I0318 12:43:14.522123    5712 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0318 12:43:14.522123    5712 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0318 12:43:14.522123    5712 command_runner.go:130] > StartLimitBurst=3
	I0318 12:43:14.522123    5712 command_runner.go:130] > StartLimitIntervalSec=60
	I0318 12:43:14.522123    5712 command_runner.go:130] > [Service]
	I0318 12:43:14.522123    5712 command_runner.go:130] > Type=notify
	I0318 12:43:14.522123    5712 command_runner.go:130] > Restart=on-failure
	I0318 12:43:14.522123    5712 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0318 12:43:14.522123    5712 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0318 12:43:14.522123    5712 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0318 12:43:14.522123    5712 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0318 12:43:14.522123    5712 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0318 12:43:14.522123    5712 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0318 12:43:14.522123    5712 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0318 12:43:14.522123    5712 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0318 12:43:14.522123    5712 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0318 12:43:14.522123    5712 command_runner.go:130] > ExecStart=
	I0318 12:43:14.522123    5712 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0318 12:43:14.522123    5712 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0318 12:43:14.522123    5712 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0318 12:43:14.522123    5712 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0318 12:43:14.522123    5712 command_runner.go:130] > LimitNOFILE=infinity
	I0318 12:43:14.522123    5712 command_runner.go:130] > LimitNPROC=infinity
	I0318 12:43:14.522123    5712 command_runner.go:130] > LimitCORE=infinity
	I0318 12:43:14.522123    5712 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0318 12:43:14.522123    5712 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0318 12:43:14.522123    5712 command_runner.go:130] > TasksMax=infinity
	I0318 12:43:14.522123    5712 command_runner.go:130] > TimeoutStartSec=0
	I0318 12:43:14.522123    5712 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0318 12:43:14.522123    5712 command_runner.go:130] > Delegate=yes
	I0318 12:43:14.522123    5712 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0318 12:43:14.522123    5712 command_runner.go:130] > KillMode=process
	I0318 12:43:14.522123    5712 command_runner.go:130] > [Install]
	I0318 12:43:14.522123    5712 command_runner.go:130] > WantedBy=multi-user.target
	I0318 12:43:14.532000    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:43:14.565326    5712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:43:14.611474    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:43:14.648709    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 12:43:14.684182    5712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 12:43:14.750586    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 12:43:14.779613    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:43:14.826532    5712 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0318 12:43:14.837540    5712 ssh_runner.go:195] Run: which cri-dockerd
	I0318 12:43:14.844394    5712 command_runner.go:130] > /usr/bin/cri-dockerd
	I0318 12:43:14.856947    5712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 12:43:14.876240    5712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 12:43:14.926777    5712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 12:43:15.142802    5712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 12:43:15.335751    5712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 12:43:15.335922    5712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 12:43:15.385443    5712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:15.603865    5712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 12:43:18.273925    5712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6700435s)
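
docker.go:574 above reports that a 130-byte /etc/docker/daemon.json is written to select the "cgroupfs" cgroup driver before docker is restarted. The log does not show the file contents; one plausible shape, with the exec-opts key being an assumption rather than a quote from this run:

    package main

    import (
    	"encoding/json"
    	"os"
    )

    func main() {
    	cfg := map[string]interface{}{
    		// Assumed key setting: tell dockerd to use the cgroupfs driver.
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	data, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
    		panic(err)
    	}
    }
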
	I0318 12:43:18.286887    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 12:43:18.325011    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 12:43:18.362806    5712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 12:43:18.582602    5712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 12:43:18.798246    5712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:19.015375    5712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 12:43:19.059889    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 12:43:19.095906    5712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:19.318696    5712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 12:43:19.432283    5712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 12:43:19.444288    5712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 12:43:19.453286    5712 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0318 12:43:19.453286    5712 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 12:43:19.453286    5712 command_runner.go:130] > Device: 0,22	Inode: 849         Links: 1
	I0318 12:43:19.453286    5712 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0318 12:43:19.453286    5712 command_runner.go:130] > Access: 2024-03-18 12:43:19.343967496 +0000
	I0318 12:43:19.453286    5712 command_runner.go:130] > Modify: 2024-03-18 12:43:19.343967496 +0000
	I0318 12:43:19.453286    5712 command_runner.go:130] > Change: 2024-03-18 12:43:19.346967492 +0000
	I0318 12:43:19.453286    5712 command_runner.go:130] >  Birth: -
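
start.go:541 above waits up to 60s for /var/run/cri-dockerd.sock to appear; in this run the stat succeeds immediately. The wait pattern is a simple stat poll against a deadline, sketched here as a hypothetical helper:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
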
	I0318 12:43:19.453286    5712 start.go:562] Will wait 60s for crictl version
	I0318 12:43:19.465319    5712 ssh_runner.go:195] Run: which crictl
	I0318 12:43:19.471975    5712 command_runner.go:130] > /usr/bin/crictl
	I0318 12:43:19.485121    5712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:43:19.572755    5712 command_runner.go:130] > Version:  0.1.0
	I0318 12:43:19.572843    5712 command_runner.go:130] > RuntimeName:  docker
	I0318 12:43:19.572843    5712 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0318 12:43:19.572843    5712 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 12:43:19.572967    5712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 12:43:19.582380    5712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 12:43:19.617922    5712 command_runner.go:130] > 25.0.4
	I0318 12:43:19.627956    5712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 12:43:19.664705    5712 command_runner.go:130] > 25.0.4
	I0318 12:43:19.667912    5712 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 12:43:19.667912    5712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 12:43:19.672492    5712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 12:43:19.672492    5712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 12:43:19.672492    5712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 12:43:19.672492    5712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ae:0d:2c Flags:up|broadcast|multicast|running}
	I0318 12:43:19.676312    5712 ip.go:210] interface addr: fe80::f8a6:d6b6:cc4:1ba0/64
	I0318 12:43:19.676312    5712 ip.go:210] interface addr: 172.25.144.1/20
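
ip.go above scans the host NICs for the one whose name matches the "vEthernet (Default Switch)" prefix, skipping "Ethernet 2" and the loopback, and takes its IPv4 address (172.25.144.1) as the host-side address. A standard-library Go sketch of that scan:

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    func main() {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		panic(err)
    	}
    	for _, iface := range ifaces {
    		if !strings.HasPrefix(iface.Name, "vEthernet (Default Switch)") {
    			continue // does not match the prefix, as in the log lines above
    		}
    		addrs, _ := iface.Addrs()
    		for _, a := range addrs {
    			// Keep only the IPv4 address; the fe80:: entry is skipped.
    			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
    				fmt.Println(iface.Name, ipnet.IP) // e.g. 172.25.144.1
    			}
    		}
    	}
    }
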
	I0318 12:43:19.690287    5712 ssh_runner.go:195] Run: grep 172.25.144.1	host.minikube.internal$ /etc/hosts
	I0318 12:43:19.697806    5712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
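
The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any stale line first, then append the current one (the same pattern is repeated later for control-plane.minikube.internal). A hedged Go rendering of that rewrite; the log itself does it with grep/echo over SSH:

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "172.25.144.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line) // keep everything except the stale entry
    		}
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    }
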
	I0318 12:43:19.721930    5712 kubeadm.go:877] updating cluster {Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.129 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.159.102 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.157.200 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 12:43:19.721930    5712 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 12:43:19.732280    5712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 12:43:19.763775    5712 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 12:43:19.763775    5712 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 12:43:19.763941    5712 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0318 12:43:19.763941    5712 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0318 12:43:19.763941    5712 docker.go:615] Images already preloaded, skipping extraction
	I0318 12:43:19.775255    5712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 12:43:19.807174    5712 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 12:43:19.807563    5712 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 12:43:19.807563    5712 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0318 12:43:19.807641    5712 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0318 12:43:19.807641    5712 cache_images.go:84] Images are preloaded, skipping loading
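
The two docker images listings above confirm that every expected preloaded image is already present, so tarball extraction and image loading are both skipped. A sketch of such a presence check, with the expected list truncated for illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	expected := []string{ // truncated; the log lists ten images
    		"registry.k8s.io/kube-apiserver:v1.28.4",
    		"registry.k8s.io/etcd:3.5.9-0",
    		"registry.k8s.io/pause:3.9",
    	}
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	for _, img := range expected {
    		if !have[img] {
    			fmt.Println("missing:", img) // would trigger preload extraction
    		}
    	}
    }
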
	I0318 12:43:19.807641    5712 kubeadm.go:928] updating node { 172.25.148.129 8443 v1.28.4 docker true true} ...
	I0318 12:43:19.807925    5712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-642600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.148.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:43:19.817637    5712 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 12:43:19.855673    5712 command_runner.go:130] > cgroupfs
	I0318 12:43:19.855946    5712 cni.go:84] Creating CNI manager for ""
	I0318 12:43:19.855946    5712 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 12:43:19.855946    5712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 12:43:19.855946    5712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.148.129 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-642600 NodeName:multinode-642600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.148.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.148.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 12:43:19.855946    5712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.148.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-642600"
	  kubeletExtraArgs:
	    node-ip: 172.25.148.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.148.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 12:43:19.869595    5712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:43:19.890835    5712 command_runner.go:130] > kubeadm
	I0318 12:43:19.890894    5712 command_runner.go:130] > kubectl
	I0318 12:43:19.890894    5712 command_runner.go:130] > kubelet
	I0318 12:43:19.890894    5712 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 12:43:19.902612    5712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 12:43:19.923839    5712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0318 12:43:19.955272    5712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:43:19.987507    5712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0318 12:43:20.030928    5712 ssh_runner.go:195] Run: grep 172.25.148.129	control-plane.minikube.internal$ /etc/hosts
	I0318 12:43:20.037057    5712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.148.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:43:20.071648    5712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:20.280384    5712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:43:20.309389    5712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600 for IP: 172.25.148.129
	I0318 12:43:20.309389    5712 certs.go:194] generating shared ca certs ...
	I0318 12:43:20.309389    5712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:20.310375    5712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0318 12:43:20.311438    5712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0318 12:43:20.311746    5712 certs.go:256] generating profile certs ...
	I0318 12:43:20.312055    5712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\client.key
	I0318 12:43:20.312780    5712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key.79698273
	I0318 12:43:20.312780    5712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt.79698273 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.148.129]
	I0318 12:43:20.558565    5712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt.79698273 ...
	I0318 12:43:20.558565    5712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt.79698273: {Name:mk2238ba7bfd2f6a337bcb117542d06a7c4668e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:20.560290    5712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key.79698273 ...
	I0318 12:43:20.560290    5712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key.79698273: {Name:mk79ccce3f71f4e955e089d2a0d5269242d694a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:20.561767    5712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt.79698273 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt
	I0318 12:43:20.576875    5712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key.79698273 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key
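
certs.go above mints a fresh apiserver serving certificate whose IP SANs are [10.96.0.1 127.0.0.1 10.0.0.1 172.25.148.129]: the service VIP, loopback, and the node's new IP. A condensed crypto/x509 sketch of issuing such a cert; self-signed here for brevity, whereas minikube signs with its shared minikubeCA:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		IPAddresses: []net.IP{ // the four SANs from the log line above
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("172.25.148.129"),
    		},
    		NotBefore:   time.Now(),
    		NotAfter:    time.Now().AddDate(3, 0, 0), // matches the 26280h CertExpiration above
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed for the sketch; minikube passes its CA cert/key as parent.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
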
	I0318 12:43:20.578257    5712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.key
	I0318 12:43:20.578257    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:43:20.578483    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:43:20.578604    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:43:20.578740    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:43:20.578886    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 12:43:20.579128    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 12:43:20.579230    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 12:43:20.579341    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 12:43:20.580232    5712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem (1338 bytes)
	W0318 12:43:20.580611    5712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120_empty.pem, impossibly tiny 0 bytes
	I0318 12:43:20.580719    5712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0318 12:43:20.581083    5712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 12:43:20.581300    5712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 12:43:20.581520    5712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 12:43:20.581966    5712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem (1708 bytes)
	I0318 12:43:20.582239    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:20.582410    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem -> /usr/share/ca-certificates/9120.pem
	I0318 12:43:20.582610    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /usr/share/ca-certificates/91202.pem
	I0318 12:43:20.583560    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:43:20.635212    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:43:20.683038    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:43:20.732428    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:43:20.783809    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 12:43:20.834564    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 12:43:20.894689    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:43:20.946175    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 12:43:20.997273    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:43:21.044263    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem --> /usr/share/ca-certificates/9120.pem (1338 bytes)
	I0318 12:43:21.094345    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /usr/share/ca-certificates/91202.pem (1708 bytes)
	I0318 12:43:21.143361    5712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 12:43:21.191133    5712 ssh_runner.go:195] Run: openssl version
	I0318 12:43:21.201045    5712 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 12:43:21.215353    5712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91202.pem && ln -fs /usr/share/ca-certificates/91202.pem /etc/ssl/certs/91202.pem"
	I0318 12:43:21.246370    5712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91202.pem
	I0318 12:43:21.253217    5712 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 12:43:21.253521    5712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 12:43:21.266456    5712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91202.pem
	I0318 12:43:21.274981    5712 command_runner.go:130] > 3ec20f2e
	I0318 12:43:21.287389    5712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91202.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 12:43:21.320914    5712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:43:21.354808    5712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:21.361560    5712 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:21.361560    5712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:21.372553    5712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:21.381560    5712 command_runner.go:130] > b5213941
	I0318 12:43:21.395210    5712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:43:21.425910    5712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9120.pem && ln -fs /usr/share/ca-certificates/9120.pem /etc/ssl/certs/9120.pem"
	I0318 12:43:21.459783    5712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9120.pem
	I0318 12:43:21.466197    5712 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 12:43:21.466263    5712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 12:43:21.478916    5712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9120.pem
	I0318 12:43:21.490949    5712 command_runner.go:130] > 51391683
	I0318 12:43:21.502945    5712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9120.pem /etc/ssl/certs/51391683.0"
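
Each openssl x509 -hash call above computes the OpenSSL subject hash (e.g. b5213941 for minikubeCA.pem), and the following ln -fs publishes the cert as /etc/ssl/certs/<hash>.0, which is how OpenSSL locates trust anchors. Illustrative Go sketch of that loop; the log shells out to openssl directly:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCert hashes one PEM file and symlinks it under /etc/ssl/certs.
    func linkCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // ignore error: the link may not exist yet
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	for _, p := range []string{
    		"/usr/share/ca-certificates/minikubeCA.pem",
    		"/usr/share/ca-certificates/9120.pem",
    		"/usr/share/ca-certificates/91202.pem",
    	} {
    		if err := linkCert(p); err != nil {
    			fmt.Fprintln(os.Stderr, p, err)
    		}
    	}
    }
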
	I0318 12:43:21.538323    5712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:43:21.545300    5712 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:43:21.545300    5712 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0318 12:43:21.545300    5712 command_runner.go:130] > Device: 8,1	Inode: 7336229     Links: 1
	I0318 12:43:21.545300    5712 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 12:43:21.545300    5712 command_runner.go:130] > Access: 2024-03-18 12:18:36.848805156 +0000
	I0318 12:43:21.545300    5712 command_runner.go:130] > Modify: 2024-03-18 12:18:36.848805156 +0000
	I0318 12:43:21.545300    5712 command_runner.go:130] > Change: 2024-03-18 12:18:36.848805156 +0000
	I0318 12:43:21.545300    5712 command_runner.go:130] >  Birth: 2024-03-18 12:18:36.848805156 +0000
	I0318 12:43:21.556310    5712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 12:43:21.565316    5712 command_runner.go:130] > Certificate will not expire
	I0318 12:43:21.578012    5712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 12:43:21.588017    5712 command_runner.go:130] > Certificate will not expire
	I0318 12:43:21.599030    5712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 12:43:21.609102    5712 command_runner.go:130] > Certificate will not expire
	I0318 12:43:21.621678    5712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 12:43:21.630595    5712 command_runner.go:130] > Certificate will not expire
	I0318 12:43:21.643316    5712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 12:43:21.652549    5712 command_runner.go:130] > Certificate will not expire
	I0318 12:43:21.665949    5712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 12:43:21.674949    5712 command_runner.go:130] > Certificate will not expire
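
The repeated openssl x509 -checkend 86400 runs above verify that none of the control-plane certificates expires within the next 24 hours. A pure-Go equivalent of one such check, as a hypothetical helper:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the cert at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	if soon {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire") // the result seen in this run
    	}
    }
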
	I0318 12:43:21.675302    5712 kubeadm.go:391] StartCluster: {Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.129 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.159.102 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.157.200 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:43:21.684960    5712 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 12:43:21.726150    5712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 12:43:21.746132    5712 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0318 12:43:21.746364    5712 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0318 12:43:21.746423    5712 command_runner.go:130] > /var/lib/minikube/etcd:
	I0318 12:43:21.746423    5712 command_runner.go:130] > member
	W0318 12:43:21.746423    5712 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 12:43:21.746423    5712 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 12:43:21.746423    5712 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 12:43:21.758628    5712 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 12:43:21.778770    5712 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:43:21.779243    5712 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-642600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:43:21.780215    5712 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-642600" cluster setting kubeconfig missing "multinode-642600" context setting]
	I0318 12:43:21.781185    5712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:21.795191    5712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:43:21.796236    5712 kapi.go:59] client config for multinode-642600: &rest.Config{Host:"https://172.25.148.129:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-642600/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-642600/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 12:43:21.797199    5712 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 12:43:21.810105    5712 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 12:43:21.828392    5712 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0318 12:43:21.828392    5712 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0318 12:43:21.828392    5712 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0318 12:43:21.828392    5712 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0318 12:43:21.828392    5712 command_runner.go:130] >  kind: InitConfiguration
	I0318 12:43:21.828392    5712 command_runner.go:130] >  localAPIEndpoint:
	I0318 12:43:21.828392    5712 command_runner.go:130] > -  advertiseAddress: 172.25.151.112
	I0318 12:43:21.828392    5712 command_runner.go:130] > +  advertiseAddress: 172.25.148.129
	I0318 12:43:21.828392    5712 command_runner.go:130] >    bindPort: 8443
	I0318 12:43:21.828392    5712 command_runner.go:130] >  bootstrapTokens:
	I0318 12:43:21.828392    5712 command_runner.go:130] >    - groups:
	I0318 12:43:21.828392    5712 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0318 12:43:21.828392    5712 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0318 12:43:21.828392    5712 command_runner.go:130] >    name: "multinode-642600"
	I0318 12:43:21.828392    5712 command_runner.go:130] >    kubeletExtraArgs:
	I0318 12:43:21.828392    5712 command_runner.go:130] > -    node-ip: 172.25.151.112
	I0318 12:43:21.828392    5712 command_runner.go:130] > +    node-ip: 172.25.148.129
	I0318 12:43:21.828392    5712 command_runner.go:130] >    taints: []
	I0318 12:43:21.828392    5712 command_runner.go:130] >  ---
	I0318 12:43:21.828392    5712 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0318 12:43:21.828392    5712 command_runner.go:130] >  kind: ClusterConfiguration
	I0318 12:43:21.828392    5712 command_runner.go:130] >  apiServer:
	I0318 12:43:21.828392    5712 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.25.151.112"]
	I0318 12:43:21.828392    5712 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.25.148.129"]
	I0318 12:43:21.828392    5712 command_runner.go:130] >    extraArgs:
	I0318 12:43:21.828392    5712 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0318 12:43:21.828392    5712 command_runner.go:130] >  controllerManager:
	I0318 12:43:21.828392    5712 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.25.151.112
	+  advertiseAddress: 172.25.148.129
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-642600"
	   kubeletExtraArgs:
	-    node-ip: 172.25.151.112
	+    node-ip: 172.25.148.129
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.25.151.112"]
	+  certSANs: ["127.0.0.1", "localhost", "172.25.148.129"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
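
kubeadm.go:634 above declares config drift because the rendered kubeadm.yaml.new no longer matches the on-disk kubeadm.yaml: the node came back from restart with IP 172.25.148.129 instead of 172.25.151.112, so the advertise address, node-ip, and certSANs all changed. The decision itself reduces to comparing the two files, sketched here:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	current, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	rendered, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	if bytes.Equal(current, rendered) {
    		fmt.Println("kubeadm config unchanged; restart can reuse it")
    	} else {
    		fmt.Println("kubeadm config drift detected; reconfiguring from kubeadm.yaml.new")
    	}
    }
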
	I0318 12:43:21.828392    5712 kubeadm.go:1154] stopping kube-system containers ...
	I0318 12:43:21.838912    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 12:43:21.871904    5712 command_runner.go:130] > e81f1d2fdb36
	I0318 12:43:21.872059    5712 command_runner.go:130] > ed38da653fbe
	I0318 12:43:21.872059    5712 command_runner.go:130] > 996fb0f2ade6
	I0318 12:43:21.872094    5712 command_runner.go:130] > 3a9b4c05a5cc
	I0318 12:43:21.872094    5712 command_runner.go:130] > 5cf42651cb21
	I0318 12:43:21.872094    5712 command_runner.go:130] > 4bbad08fe59a
	I0318 12:43:21.872094    5712 command_runner.go:130] > 2f4709a3a45a
	I0318 12:43:21.872094    5712 command_runner.go:130] > fef37141be6d
	I0318 12:43:21.872094    5712 command_runner.go:130] > 301c80f8b38c
	I0318 12:43:21.872094    5712 command_runner.go:130] > a54be4436901
	I0318 12:43:21.872094    5712 command_runner.go:130] > 47777d4c0b90
	I0318 12:43:21.872094    5712 command_runner.go:130] > 4b94d396876e
	I0318 12:43:21.872094    5712 command_runner.go:130] > f100b1062a56
	I0318 12:43:21.872094    5712 command_runner.go:130] > aad98ae0cd7c
	I0318 12:43:21.872094    5712 command_runner.go:130] > 3500a9f1ca84
	I0318 12:43:21.872094    5712 command_runner.go:130] > d766c4514f0b
	I0318 12:43:21.873080    5712 docker.go:483] Stopping containers: [e81f1d2fdb36 ed38da653fbe 996fb0f2ade6 3a9b4c05a5cc 5cf42651cb21 4bbad08fe59a 2f4709a3a45a fef37141be6d 301c80f8b38c a54be4436901 47777d4c0b90 4b94d396876e f100b1062a56 aad98ae0cd7c 3500a9f1ca84 d766c4514f0b]
	I0318 12:43:21.882579    5712 ssh_runner.go:195] Run: docker stop e81f1d2fdb36 ed38da653fbe 996fb0f2ade6 3a9b4c05a5cc 5cf42651cb21 4bbad08fe59a 2f4709a3a45a fef37141be6d 301c80f8b38c a54be4436901 47777d4c0b90 4b94d396876e f100b1062a56 aad98ae0cd7c 3500a9f1ca84 d766c4514f0b
	I0318 12:43:21.907375    5712 command_runner.go:130] > e81f1d2fdb36
	I0318 12:43:21.907375    5712 command_runner.go:130] > ed38da653fbe
	I0318 12:43:21.907375    5712 command_runner.go:130] > 996fb0f2ade6
	I0318 12:43:21.907375    5712 command_runner.go:130] > 3a9b4c05a5cc
	I0318 12:43:21.907375    5712 command_runner.go:130] > 5cf42651cb21
	I0318 12:43:21.907375    5712 command_runner.go:130] > 4bbad08fe59a
	I0318 12:43:21.907375    5712 command_runner.go:130] > 2f4709a3a45a
	I0318 12:43:21.907375    5712 command_runner.go:130] > fef37141be6d
	I0318 12:43:21.908281    5712 command_runner.go:130] > 301c80f8b38c
	I0318 12:43:21.908281    5712 command_runner.go:130] > a54be4436901
	I0318 12:43:21.908281    5712 command_runner.go:130] > 47777d4c0b90
	I0318 12:43:21.908281    5712 command_runner.go:130] > 4b94d396876e
	I0318 12:43:21.908281    5712 command_runner.go:130] > f100b1062a56
	I0318 12:43:21.908281    5712 command_runner.go:130] > aad98ae0cd7c
	I0318 12:43:21.908281    5712 command_runner.go:130] > 3500a9f1ca84
	I0318 12:43:21.908281    5712 command_runner.go:130] > d766c4514f0b
	I0318 12:43:21.922781    5712 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 12:43:21.964269    5712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 12:43:21.982306    5712 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0318 12:43:21.982306    5712 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0318 12:43:21.982600    5712 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0318 12:43:21.982600    5712 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 12:43:21.982686    5712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 12:43:21.982686    5712 kubeadm.go:156] found existing configuration files:
	
	I0318 12:43:21.995588    5712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 12:43:22.012574    5712 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 12:43:22.013798    5712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 12:43:22.025175    5712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 12:43:22.056252    5712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 12:43:22.073646    5712 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 12:43:22.076433    5712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 12:43:22.089597    5712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 12:43:22.121143    5712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 12:43:22.141200    5712 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 12:43:22.141411    5712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 12:43:22.154373    5712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 12:43:22.188829    5712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 12:43:22.211875    5712 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 12:43:22.212870    5712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 12:43:22.223862    5712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 12:43:22.253860    5712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 12:43:22.271866    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 12:43:22.688838    5712 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using the existing "sa" key
	I0318 12:43:22.688921    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 12:43:23.659422    5712 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 12:43:23.659465    5712 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 12:43:23.659465    5712 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 12:43:23.659465    5712 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 12:43:23.659582    5712 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 12:43:23.659582    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 12:43:23.990576    5712 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 12:43:23.991551    5712 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 12:43:23.991551    5712 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0318 12:43:23.991551    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 12:43:24.092646    5712 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 12:43:24.092646    5712 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 12:43:24.092646    5712 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 12:43:24.092646    5712 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 12:43:24.092646    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 12:43:24.199590    5712 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 12:43:24.199741    5712 api_server.go:52] waiting for apiserver process to appear ...
	I0318 12:43:24.213821    5712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:43:24.719364    5712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:43:25.225456    5712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:43:25.718935    5712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:43:26.225662    5712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:43:26.274272    5712 command_runner.go:130] > 1997
	I0318 12:43:26.274465    5712 api_server.go:72] duration metric: took 2.0745976s to wait for apiserver process to appear ...
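The control-plane and etcd phases above only write static pod manifests under /etc/kubernetes/manifests; the kubelet then launches the containers, which is why the log polls pgrep roughly every 500ms until a kube-apiserver process appears. A sketch of that wait loop (illustrative only; the real loop sits behind the api_server.go:52 call above):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll for the apiserver process the kubelet starts from its static
	// pod manifest; mirrors the repeated pgrep runs in the log above.
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("kube-apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}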
	I0318 12:43:26.274465    5712 api_server.go:88] waiting for apiserver healthz status ...
	I0318 12:43:26.274465    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:43:30.431160    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 12:43:30.431258    5712 api_server.go:103] status: https://172.25.148.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 12:43:30.431258    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:43:30.509034    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 12:43:30.509450    5712 api_server.go:103] status: https://172.25.148.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 12:43:30.779944    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:43:30.787142    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 12:43:30.788009    5712 api_server.go:103] status: https://172.25.148.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 12:43:31.282368    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:43:31.294565    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 12:43:31.294607    5712 api_server.go:103] status: https://172.25.148.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 12:43:31.775192    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:43:31.784000    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 12:43:31.784571    5712 api_server.go:103] status: https://172.25.148.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 12:43:32.282336    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:43:32.292921    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 200:
	ok
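The healthz progression above is the normal restart sequence: 403 while the RBAC bootstrap policy that permits unauthenticated /healthz reads has not been created yet (hence "system:anonymous ... Forbidden"), then 500 while the verbose check list still shows pending poststarthooks (rbac/bootstrap-roles, then scheduling/bootstrap-system-priority-classes), and finally a plain 200 "ok" once every check passes. A minimal Go sketch of the polling, assuming the apiserver URL from the log and skipping TLS verification purely for brevity (the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	for {
		resp, err := client.Get("https://172.25.148.129:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // 200 "ok": all checks passed
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}

The same verbose check list can also be fetched interactively with kubectl get --raw '/healthz?verbose' once a kubeconfig is available.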
	I0318 12:43:32.292921    5712 round_trippers.go:463] GET https://172.25.148.129:8443/version
	I0318 12:43:32.292921    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:32.292921    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:32.292921    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:32.307425    5712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0318 12:43:32.308415    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:32.308454    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:32 GMT
	I0318 12:43:32.308454    5712 round_trippers.go:580]     Audit-Id: 43aceb14-36a6-4e46-b05e-76fe75a5153b
	I0318 12:43:32.308454    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:32.308454    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:32.308454    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:32.308454    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:32.308454    5712 round_trippers.go:580]     Content-Length: 264
	I0318 12:43:32.308454    5712 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 12:43:32.308454    5712 api_server.go:141] control plane version: v1.28.4
	I0318 12:43:32.308454    5712 api_server.go:131] duration metric: took 6.0339513s to wait for apiserver health ...
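The version probe above is a plain GET /version; with client-go the equivalent goes through the discovery client. A sketch, assuming the guest-side kubeconfig path that the kubectl invocations later in this log use (/var/lib/minikube/kubeconfig):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Discovery issues GET /version, the same request shown in the log.
	sv, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println(sv.GitVersion) // e.g. v1.28.4
}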
	I0318 12:43:32.308454    5712 cni.go:84] Creating CNI manager for ""
	I0318 12:43:32.308454    5712 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 12:43:32.312940    5712 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 12:43:32.329162    5712 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 12:43:32.338044    5712 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0318 12:43:32.338148    5712 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0318 12:43:32.338148    5712 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0318 12:43:32.338148    5712 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 12:43:32.338148    5712 command_runner.go:130] > Access: 2024-03-18 12:41:53.487737500 +0000
	I0318 12:43:32.338148    5712 command_runner.go:130] > Modify: 2024-03-15 22:00:10.000000000 +0000
	I0318 12:43:32.338148    5712 command_runner.go:130] > Change: 2024-03-18 12:41:44.149000000 +0000
	I0318 12:43:32.338262    5712 command_runner.go:130] >  Birth: -
	I0318 12:43:32.338262    5712 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 12:43:32.338262    5712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 12:43:32.417885    5712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 12:43:34.168750    5712 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0318 12:43:34.168750    5712 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0318 12:43:34.168750    5712 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0318 12:43:34.168750    5712 command_runner.go:130] > daemonset.apps/kindnet configured
	I0318 12:43:34.168883    5712 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.7509869s)
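Because three nodes were detected, cni.go picked kindnet and applied its manifest; the "unchanged"/"configured" results show the objects already existed from the previous run. A small client-go sketch to confirm the rollout afterwards (function-level only; clientset construction as in the /version example, with the DaemonSet name and namespace taken from the log):

package clustercheck

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// kindnetReady returns ready vs. desired pod counts for the kindnet
// DaemonSet that the kubectl apply above configured.
func kindnetReady(cs *kubernetes.Clientset) (ready, desired int32, err error) {
	ds, err := cs.AppsV1().DaemonSets("kube-system").Get(
		context.TODO(), "kindnet", metav1.GetOptions{})
	if err != nil {
		return 0, 0, err
	}
	return ds.Status.NumberReady, ds.Status.DesiredNumberScheduled, nil
}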
	I0318 12:43:34.168883    5712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 12:43:34.168883    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods
	I0318 12:43:34.168883    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.168883    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.168883    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.176093    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:43:34.176093    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.176093    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.176093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.179465    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.179465    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.179465    5712 round_trippers.go:580]     Audit-Id: 32f3e80b-ba51-4bef-8025-4405d3d75ffe
	I0318 12:43:34.179465    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.180947    5712 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1873"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83637 chars]
	I0318 12:43:34.187800    5712 system_pods.go:59] 12 kube-system pods found
	I0318 12:43:34.187800    5712 system_pods.go:61] "coredns-5dd5756b68-fgn7v" [7bc52797-b4bd-4046-b3d5-fae9c8ccd13b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 12:43:34.187800    5712 system_pods.go:61] "etcd-multinode-642600" [6f0ca14e-af4b-4442-8a48-28b69c699976] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 12:43:34.187800    5712 system_pods.go:61] "kindnet-d5llj" [caa4170d-6120-414a-950c-92a0380a70b8] Running
	I0318 12:43:34.187800    5712 system_pods.go:61] "kindnet-kpt4f" [acd9d7a0-0e27-4bbb-8562-6fbf374742ca] Running
	I0318 12:43:34.187800    5712 system_pods.go:61] "kindnet-thkjp" [a7e20c36-c1d1-4146-a66c-40448e1ae0e5] Running
	I0318 12:43:34.187800    5712 system_pods.go:61] "kube-apiserver-multinode-642600" [ab8e6b8b-cbac-4c90-8f57-9af2760ced9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 12:43:34.187800    5712 system_pods.go:61] "kube-controller-manager-multinode-642600" [1dd2a576-c5a0-44e5-b194-545e8b18962c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 12:43:34.187800    5712 system_pods.go:61] "kube-proxy-4dg79" [449242c2-ad12-4da5-b339-3be7ab8a9b16] Running
	I0318 12:43:34.187800    5712 system_pods.go:61] "kube-proxy-khbjt" [594efa46-7e30-40e6-92dd-9c9c80bc787a] Running
	I0318 12:43:34.187800    5712 system_pods.go:61] "kube-proxy-vts9f" [9545be8f-07fd-49dd-99bd-e9e976e65e7b] Running
	I0318 12:43:34.187800    5712 system_pods.go:61] "kube-scheduler-multinode-642600" [52e29d3b-d6e9-4109-916d-74123a2ab190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 12:43:34.187800    5712 system_pods.go:61] "storage-provisioner" [d2718b8a-26a9-4c86-bf9a-221d1ee23ceb] Running
	I0318 12:43:34.187800    5712 system_pods.go:74] duration metric: took 18.9164ms to wait for pod list to return data ...
	I0318 12:43:34.187800    5712 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:43:34.188666    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes
	I0318 12:43:34.188666    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.188666    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.188666    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.193555    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:34.193601    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.193601    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.193601    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.193601    5712 round_trippers.go:580]     Audit-Id: 86bd5076-fa9c-49aa-bbeb-584069174c5d
	I0318 12:43:34.193601    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.193601    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.193601    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.193601    5712 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1873"},"items":[{"metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15630 chars]
	I0318 12:43:34.195206    5712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:43:34.195284    5712 node_conditions.go:123] node cpu capacity is 2
	I0318 12:43:34.195284    5712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:43:34.195284    5712 node_conditions.go:123] node cpu capacity is 2
	I0318 12:43:34.195362    5712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:43:34.195362    5712 node_conditions.go:123] node cpu capacity is 2
	I0318 12:43:34.195362    5712 node_conditions.go:105] duration metric: took 7.5625ms to run NodePressure ...
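The three identical capacity pairs above are printed once per node of this three-node cluster; node_conditions.go iterates the NodeList, so the per-node attribution is implicit. A hedged client-go sketch that lists the same two figures with node names attached (clientset construction as in the earlier /version example):

package clustercheck

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists per-node CPU and ephemeral-storage capacity,
// the two values the NodePressure verification above reads.
func printNodeCapacity(cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
	return nil
}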
	I0318 12:43:34.195410    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 12:43:34.595136    5712 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0318 12:43:34.595196    5712 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0318 12:43:34.595540    5712 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 12:43:34.595776    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0318 12:43:34.595776    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.595776    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.595776    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.601199    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:34.601199    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.601199    5712 round_trippers.go:580]     Audit-Id: e7cb6862-af2e-4c11-a0d4-9872d3b79787
	I0318 12:43:34.601199    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.601199    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.601199    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.601199    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.601199    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.604156    5712 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1875"},"items":[{"metadata":{"name":"etcd-multinode-642600","namespace":"kube-system","uid":"6f0ca14e-af4b-4442-8a48-28b69c699976","resourceVersion":"1860","creationTimestamp":"2024-03-18T12:43:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.148.129:2379","kubernetes.io/config.hash":"d5f09afee1a6ef36657c1ae3335ddda6","kubernetes.io/config.mirror":"d5f09afee1a6ef36657c1ae3335ddda6","kubernetes.io/config.seen":"2024-03-18T12:43:24.228249982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:43:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 29377 chars]
	I0318 12:43:34.606092    5712 kubeadm.go:733] kubelet initialised
	I0318 12:43:34.606092    5712 kubeadm.go:734] duration metric: took 10.521ms waiting for restarted kubelet to initialise ...
	I0318 12:43:34.606173    5712 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:43:34.606316    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods
	I0318 12:43:34.606362    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.606362    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.606415    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.614356    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:43:34.614718    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.614718    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.614718    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.614718    5712 round_trippers.go:580]     Audit-Id: 5a59db11-1d14-49b2-9598-cc4448f68a56
	I0318 12:43:34.614718    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.614718    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.614718    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.616859    5712 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1875"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83637 chars]
	I0318 12:43:34.621279    5712 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:34.621279    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:43:34.621279    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.621279    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.621279    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.625415    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:34.625415    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.626307    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.626307    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.626307    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.626307    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.626362    5712 round_trippers.go:580]     Audit-Id: 32630a99-0a82-4ba2-961e-18364b55e578
	I0318 12:43:34.626362    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.626501    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:43:34.626792    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:34.626792    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.626792    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.626792    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.631258    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:34.631317    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.631317    5712 round_trippers.go:580]     Audit-Id: 6a36dae6-d08a-4c3d-b097-6b4f469a7a34
	I0318 12:43:34.631317    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.631317    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.631317    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.631317    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.631317    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.631715    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:34.632096    5712 pod_ready.go:97] node "multinode-642600" hosting pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.632096    5712 pod_ready.go:81] duration metric: took 10.8168ms for pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:34.632096    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600" hosting pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
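The pattern that repeats from here on: each control-plane pod wait is cut short ("skipping!") because the hosting node's Ready condition is still "False" after the restart, so the wait falls through to the next pod instead of blocking for the full 4m0s. The node-side test is equivalent to this sketch (an assumed helper, not minikube's pod_ready.go):

package clustercheck

import (
	corev1 "k8s.io/api/core/v1"
)

// nodeReady reports whether a node's Ready condition is True; the waits
// above skip a pod whenever this is false for its node.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}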
	I0318 12:43:34.632096    5712 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:34.632096    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-642600
	I0318 12:43:34.632096    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.632096    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.632096    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.636843    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:34.636843    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.636899    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.636899    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.636899    5712 round_trippers.go:580]     Audit-Id: a6e13cb6-e867-4d6d-b4c3-c8f8002385b3
	I0318 12:43:34.636899    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.636899    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.636899    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.637391    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-642600","namespace":"kube-system","uid":"6f0ca14e-af4b-4442-8a48-28b69c699976","resourceVersion":"1860","creationTimestamp":"2024-03-18T12:43:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.148.129:2379","kubernetes.io/config.hash":"d5f09afee1a6ef36657c1ae3335ddda6","kubernetes.io/config.mirror":"d5f09afee1a6ef36657c1ae3335ddda6","kubernetes.io/config.seen":"2024-03-18T12:43:24.228249982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:43:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6097 chars]
	I0318 12:43:34.637928    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:34.637992    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.637992    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.637992    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.641708    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:34.642009    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.642009    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.642009    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.642009    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.642009    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.642009    5712 round_trippers.go:580]     Audit-Id: 539f5d1c-5dd2-47d6-bf00-d44564ec3cc9
	I0318 12:43:34.642009    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.642352    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:34.642733    5712 pod_ready.go:97] node "multinode-642600" hosting pod "etcd-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.642733    5712 pod_ready.go:81] duration metric: took 10.6374ms for pod "etcd-multinode-642600" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:34.642733    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600" hosting pod "etcd-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.642733    5712 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:34.642837    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-642600
	I0318 12:43:34.642921    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.642921    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.642921    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.647401    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:34.647401    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.647401    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.647401    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.647401    5712 round_trippers.go:580]     Audit-Id: 93f0c8ff-95f3-4443-8db5-c3b07d3342ca
	I0318 12:43:34.647401    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.647401    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.647401    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.647401    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-642600","namespace":"kube-system","uid":"ab8e6b8b-cbac-4c90-8f57-9af2760ced9c","resourceVersion":"1861","creationTimestamp":"2024-03-18T12:43:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.148.129:8443","kubernetes.io/config.hash":"624de65f019baf96d4a0e2fb6064e413","kubernetes.io/config.mirror":"624de65f019baf96d4a0e2fb6064e413","kubernetes.io/config.seen":"2024-03-18T12:43:24.228255882Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:43:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7653 chars]
	I0318 12:43:34.648073    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:34.648073    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.648073    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.648073    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.652650    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:34.652650    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.652650    5712 round_trippers.go:580]     Audit-Id: c379ac6e-d821-4bef-a43b-6da103b3f147
	I0318 12:43:34.652650    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.652650    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.652650    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.652650    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.652650    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.653281    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:34.653892    5712 pod_ready.go:97] node "multinode-642600" hosting pod "kube-apiserver-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.653892    5712 pod_ready.go:81] duration metric: took 11.1586ms for pod "kube-apiserver-multinode-642600" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:34.653892    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600" hosting pod "kube-apiserver-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.653892    5712 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:34.653892    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-642600
	I0318 12:43:34.653892    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.653892    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.653892    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.656485    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:43:34.657492    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.657492    5712 round_trippers.go:580]     Audit-Id: db91fcad-fcf3-49a4-92a3-30bfb50ee2e8
	I0318 12:43:34.657536    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.657536    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.657536    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.657571    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.657598    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.657598    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-642600","namespace":"kube-system","uid":"1dd2a576-c5a0-44e5-b194-545e8b18962c","resourceVersion":"1855","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a1608bc774d0b3e96e1b6fbbded5cb52","kubernetes.io/config.mirror":"a1608bc774d0b3e96e1b6fbbded5cb52","kubernetes.io/config.seen":"2024-03-18T12:18:50.896437006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7441 chars]
	I0318 12:43:34.658568    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:34.658568    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.658633    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.658633    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.661456    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:43:34.661456    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.662173    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.662173    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.662173    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.662173    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.662173    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.662173    5712 round_trippers.go:580]     Audit-Id: 639b0277-1310-4284-ae95-88d321fdf886
	I0318 12:43:34.662173    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:34.662815    5712 pod_ready.go:97] node "multinode-642600" hosting pod "kube-controller-manager-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.662815    5712 pod_ready.go:81] duration metric: took 8.9226ms for pod "kube-controller-manager-multinode-642600" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:34.662815    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600" hosting pod "kube-controller-manager-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.662815    5712 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4dg79" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:34.796545    5712 request.go:629] Waited for 133.579ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4dg79
	I0318 12:43:34.796733    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4dg79
	I0318 12:43:34.796733    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.796841    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.796841    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.800297    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:34.800297    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.801285    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.801285    5712 round_trippers.go:580]     Audit-Id: decc170d-406e-4991-baf1-6d51a48af9dd
	I0318 12:43:34.801332    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.801332    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.801332    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.801332    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.801606    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4dg79","generateName":"kube-proxy-","namespace":"kube-system","uid":"449242c2-ad12-4da5-b339-3be7ab8a9b16","resourceVersion":"1871","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
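
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own token-bucket rate limiter, not from the API server: with the default rest.Config of QPS 5 and Burst 10, this tight polling loop exhausts the burst and each further request is delayed on the client side. A hedged sketch of where those knobs live (kubeconfigPath and the chosen values are illustrative):

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClient builds a clientset with a higher client-side rate limit,
    // which would shorten the throttling waits seen in this log.
    func newClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is 5 requests per second
        cfg.Burst = 100 // default burst is 10
        return kubernetes.NewForConfig(cfg)
    }
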
	I0318 12:43:34.999201    5712 request.go:629] Waited for 196.6983ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:34.999510    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:34.999510    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.999565    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.999565    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:35.004063    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:35.004144    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:35.004144    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:35.004144    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:35 GMT
	I0318 12:43:35.004144    5712 round_trippers.go:580]     Audit-Id: 74d36358-d5a0-4814-8c39-2ffb5316860e
	I0318 12:43:35.004223    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:35.004223    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:35.004223    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:35.004443    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:35.004713    5712 pod_ready.go:97] node "multinode-642600" hosting pod "kube-proxy-4dg79" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:35.004713    5712 pod_ready.go:81] duration metric: took 341.8961ms for pod "kube-proxy-4dg79" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:35.005180    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600" hosting pod "kube-proxy-4dg79" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:35.005180    5712 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-khbjt" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:35.202176    5712 request.go:629] Waited for 196.8189ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-khbjt
	I0318 12:43:35.202363    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-khbjt
	I0318 12:43:35.202363    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:35.202363    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:35.202493    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:35.206378    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:35.206787    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:35.206787    5712 round_trippers.go:580]     Audit-Id: 17a37ac7-71ff-4ad2-b99c-8df5e67c24b5
	I0318 12:43:35.206787    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:35.206787    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:35.206787    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:35.206787    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:35.206787    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:35 GMT
	I0318 12:43:35.207490    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-khbjt","generateName":"kube-proxy-","namespace":"kube-system","uid":"594efa46-7e30-40e6-92dd-9c9c80bc787a","resourceVersion":"1825","creationTimestamp":"2024-03-18T12:27:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:27:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5771 chars]
	I0318 12:43:35.404715    5712 request.go:629] Waited for 196.2004ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m03
	I0318 12:43:35.404916    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m03
	I0318 12:43:35.405086    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:35.405086    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:35.405086    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:35.409419    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:35.409419    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:35.409419    5712 round_trippers.go:580]     Audit-Id: e383b8c1-6ac7-4db0-8b81-3bf40b1567ef
	I0318 12:43:35.409419    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:35.409419    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:35.409419    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:35.409419    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:35.409419    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:35 GMT
	I0318 12:43:35.410078    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m03","uid":"e9bc5257-e8c0-493d-a533-c2a8a832d45e","resourceVersion":"1838","creationTimestamp":"2024-03-18T12:38:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_38_47_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0318 12:43:35.410556    5712 pod_ready.go:97] node "multinode-642600-m03" hosting pod "kube-proxy-khbjt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600-m03" has status "Ready":"Unknown"
	I0318 12:43:35.410556    5712 pod_ready.go:81] duration metric: took 405.3732ms for pod "kube-proxy-khbjt" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:35.410556    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600-m03" hosting pod "kube-proxy-khbjt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600-m03" has status "Ready":"Unknown"
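
Note the two different skip reasons so far: multinode-642600 reports Ready "False" (its kubelet is up and reporting NotReady while the control plane restarts), while multinode-642600-m03 reports Ready "Unknown" (the node controller has stopped hearing from that kubelet). Both are the same NodeReady condition with a different Status, roughly as below (field values are illustrative, based on the usual kubelet and node-controller reasons; assumes corev1 "k8s.io/api/core/v1" and fmt are imported):

    // Ready:"False": the kubelet itself reports the node as not ready.
    notReady := corev1.NodeCondition{Type: corev1.NodeReady, Status: corev1.ConditionFalse, Reason: "KubeletNotReady"}
    // Ready:"Unknown": the node controller lost the kubelet's heartbeats.
    unreachable := corev1.NodeCondition{Type: corev1.NodeReady, Status: corev1.ConditionUnknown, Reason: "NodeStatusUnknown"}
    fmt.Println(notReady.Status, unreachable.Status) // False Unknown
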
	I0318 12:43:35.410556    5712 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vts9f" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:35.607219    5712 request.go:629] Waited for 196.6621ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vts9f
	I0318 12:43:35.607653    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vts9f
	I0318 12:43:35.607732    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:35.607732    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:35.607769    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:35.612070    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:35.612070    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:35.612070    5712 round_trippers.go:580]     Audit-Id: b4a4b5a0-8bf9-40db-b99c-8d58aaad0f5d
	I0318 12:43:35.612455    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:35.612455    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:35.612455    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:35.612455    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:35.612455    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:35 GMT
	I0318 12:43:35.612544    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vts9f","generateName":"kube-proxy-","namespace":"kube-system","uid":"9545be8f-07fd-49dd-99bd-e9e976e65e7b","resourceVersion":"648","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0318 12:43:35.810625    5712 request.go:629] Waited for 197.022ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:43:35.810625    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:43:35.810892    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:35.810892    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:35.810892    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:35.816058    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:35.816058    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:35.816058    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:35.816058    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:35 GMT
	I0318 12:43:35.816058    5712 round_trippers.go:580]     Audit-Id: 311f0653-c49b-4648-93f2-a9baa5c4aa02
	I0318 12:43:35.816058    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:35.816058    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:35.816058    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:35.816058    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"1670","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3827 chars]
	I0318 12:43:35.816890    5712 pod_ready.go:92] pod "kube-proxy-vts9f" in "kube-system" namespace has status "Ready":"True"
	I0318 12:43:35.816956    5712 pod_ready.go:81] duration metric: took 406.3313ms for pod "kube-proxy-vts9f" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:35.816956    5712 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:35.998359    5712 request.go:629] Waited for 181.2964ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-642600
	I0318 12:43:35.998496    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-642600
	I0318 12:43:35.998496    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:35.998496    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:35.998496    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:36.002968    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:36.003701    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:36.003701    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:36.003701    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:36.003701    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:36.003701    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:36 GMT
	I0318 12:43:36.003701    5712 round_trippers.go:580]     Audit-Id: dc7c2592-b25a-4f2c-b8e1-33f1f01c8d3e
	I0318 12:43:36.003701    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:36.003981    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-642600","namespace":"kube-system","uid":"52e29d3b-d6e9-4109-916d-74123a2ab190","resourceVersion":"1857","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cf50844b540be8ed0b3e767db413ac8f","kubernetes.io/config.mirror":"cf50844b540be8ed0b3e767db413ac8f","kubernetes.io/config.seen":"2024-03-18T12:18:50.896438106Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5153 chars]
	I0318 12:43:36.202872    5712 request.go:629] Waited for 198.2864ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:36.203254    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:36.203254    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:36.203339    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:36.203339    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:36.206259    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:43:36.206738    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:36.206738    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:36.206738    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:36.206738    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:36 GMT
	I0318 12:43:36.206738    5712 round_trippers.go:580]     Audit-Id: db7e6b8a-01e5-432d-ab83-31156aae76bd
	I0318 12:43:36.206738    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:36.206738    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:36.207728    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:36.208289    5712 pod_ready.go:97] node "multinode-642600" hosting pod "kube-scheduler-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:36.208421    5712 pod_ready.go:81] duration metric: took 391.4621ms for pod "kube-scheduler-multinode-642600" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:36.208421    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600" hosting pod "kube-scheduler-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:36.208421    5712 pod_ready.go:38] duration metric: took 1.6022375s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
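
The 1.6s total above is the sum of the short per-pod waits, taken over the component label selectors listed in the message. A compact sketch of that outer loop (simplified: client, ctx and the imports are assumed wiring, waitPodReady is a hypothetical per-pod check, and the real code skips rather than fails pods whose hosting node is not Ready):

    selectors := []string{
        "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
        "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
    }
    start := time.Now()
    for _, sel := range selectors {
        pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
        if err != nil {
            return err
        }
        for i := range pods.Items {
            if err := waitPodReady(ctx, client, &pods.Items[i]); err != nil { // hypothetical helper
                return err
            }
        }
    }
    log.Printf("extra waiting took %s", time.Since(start))
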
	I0318 12:43:36.208421    5712 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 12:43:36.230536    5712 command_runner.go:130] > -16
	I0318 12:43:36.230614    5712 ops.go:34] apiserver oom_adj: -16
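
The oom_adj probe is a plain shell pipeline run inside the VM: resolve the kube-apiserver PID with pgrep, read its /proc/<pid>/oom_adj, and expect a negative value (-16 here) so the kernel's OOM killer prefers other processes over the apiserver. A standalone sketch of the same check (run locally via os/exec for illustration; minikube actually runs it through its ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
    )

    func main() {
        // Same command as the log: cat /proc/$(pgrep kube-apiserver)/oom_adj
        out, err := exec.Command("/bin/bash", "-c", `cat /proc/$(pgrep kube-apiserver)/oom_adj`).Output()
        if err != nil {
            panic(err)
        }
        adj, err := strconv.Atoi(strings.TrimSpace(string(out)))
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", adj) // -16 in the run above
    }
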
	I0318 12:43:36.230614    5712 kubeadm.go:591] duration metric: took 14.4841002s to restartPrimaryControlPlane
	I0318 12:43:36.230734    5712 kubeadm.go:393] duration metric: took 14.5553123s to StartCluster
	I0318 12:43:36.230734    5712 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:36.230979    5712 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:43:36.232563    5712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
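
The lock.go line prints a spec with Name, Clock, Delay 500ms, Timeout 1m0s and Cancel, which matches the shape of github.com/juju/mutex's Spec; the idea is to hold a machine-wide named mutex while rewriting the shared kubeconfig so concurrent test processes do not corrupt it. A hedged sketch of that acquire/release pattern (the v2 import path and the hashed lock name are assumptions read off the log line):

    import (
        "time"

        "github.com/juju/clock"
        "github.com/juju/mutex/v2"
    )

    // Sketch: serialize kubeconfig writes across processes with a named mutex.
    spec := mutex.Spec{
        Name:    "mk4f4c590fd703778dedd3b8c3d630c561af8c6e", // hash of the file path, as in the log
        Clock:   clock.WallClock,
        Delay:   500 * time.Millisecond, // retry interval while the lock is held elsewhere
        Timeout: time.Minute,            // give up after 1m0s
    }
    releaser, err := mutex.Acquire(spec)
    if err != nil {
        return err
    }
    defer releaser.Release()
    // ... write the kubeconfig file while the lock is held ...
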
	I0318 12:43:36.234442    5712 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.148.129 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 12:43:36.234442    5712 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 12:43:36.234775    5712 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:43:36.252883    5712 out.go:177] * Verifying Kubernetes components...
	I0318 12:43:36.259375    5712 out.go:177] * Enabled addons: 
	I0318 12:43:36.261674    5712 addons.go:505] duration metric: took 27.232ms for enable addons: enabled=[]
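
The toEnable map above lists every known addon explicitly set to false, so this restart enables nothing and the step finishes in 27.232ms with enabled=[]. A small illustrative sketch of consuming such a map (names abbreviated from the log; the iteration is an assumption, not minikube's code, and assumes fmt and sort are imported):

    toEnable := map[string]bool{"registry": false, "metrics-server": false} // abbreviated
    var enabled []string
    for name, on := range toEnable {
        if on {
            enabled = append(enabled, name)
        }
    }
    sort.Strings(enabled) // map iteration order is random, so sort for stable output
    fmt.Printf("enable addons completed: enabled=%v\n", enabled) // enabled=[] here
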
	I0318 12:43:36.271947    5712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:36.622755    5712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:43:36.657427    5712 node_ready.go:35] waiting up to 6m0s for node "multinode-642600" to be "Ready" ...
	I0318 12:43:36.657775    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:36.657775    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:36.657828    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:36.657828    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:36.661755    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:36.661755    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:36.661755    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:36.661755    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:36.661755    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:36 GMT
	I0318 12:43:36.661755    5712 round_trippers.go:580]     Audit-Id: 87fb7c74-6cf5-46ff-abe1-017bf2f0811b
	I0318 12:43:36.661755    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:36.661755    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:36.662762    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:37.169346    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:37.169346    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:37.169462    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:37.169462    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:37.174300    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:37.174300    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:37.174300    5712 round_trippers.go:580]     Audit-Id: 50a4149e-e944-4e64-a28c-82f882c3ea09
	I0318 12:43:37.174300    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:37.174300    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:37.174382    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:37.174382    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:37.174382    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:37 GMT
	I0318 12:43:37.174666    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:37.668041    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:37.668290    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:37.668290    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:37.668290    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:37.672053    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:37.672053    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:37.673044    5712 round_trippers.go:580]     Audit-Id: e55d01fb-974b-47a9-8954-0055cfc16a76
	I0318 12:43:37.673044    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:37.673069    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:37.673069    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:37.673069    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:37.673069    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:37 GMT
	I0318 12:43:37.673402    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:38.171467    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:38.171760    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:38.171760    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:38.171913    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:38.175560    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:38.175560    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:38.177184    5712 round_trippers.go:580]     Audit-Id: f6da97ee-f3f3-4de5-a853-42e211405ce1
	I0318 12:43:38.177184    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:38.177225    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:38.177225    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:38.177225    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:38.177225    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:38 GMT
	I0318 12:43:38.177694    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:38.669564    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:38.669564    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:38.669564    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:38.669564    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:38.674181    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:38.674181    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:38.674615    5712 round_trippers.go:580]     Audit-Id: 8d5f4768-48d9-433e-b712-b354ae2572da
	I0318 12:43:38.674615    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:38.674615    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:38.674615    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:38.674615    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:38.674615    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:38 GMT
	I0318 12:43:38.675324    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:38.675915    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:43:39.169336    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:39.169408    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:39.169408    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:39.169408    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:39.173847    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:39.173847    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:39.173847    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:39 GMT
	I0318 12:43:39.173847    5712 round_trippers.go:580]     Audit-Id: b9cb5316-bd68-45d4-aac1-a671e6019340
	I0318 12:43:39.173847    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:39.174063    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:39.174063    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:39.174063    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:39.174457    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:39.669846    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:39.670176    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:39.670176    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:39.670176    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:39.675779    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:39.675779    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:39.675779    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:39.675779    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:39.675779    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:39.675779    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:39.675779    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:39 GMT
	I0318 12:43:39.675779    5712 round_trippers.go:580]     Audit-Id: 2a34cdf3-f8fc-43d1-83e6-ed6142e2751f
	I0318 12:43:39.676374    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:40.168133    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:40.168133    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:40.168216    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:40.168216    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:40.175565    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:43:40.175565    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:40.175565    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:40 GMT
	I0318 12:43:40.175565    5712 round_trippers.go:580]     Audit-Id: 84f538ff-d286-4938-a41d-328638b08475
	I0318 12:43:40.175945    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:40.175997    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:40.175997    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:40.175997    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:40.176224    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:40.668332    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:40.668398    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:40.668398    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:40.668398    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:40.672315    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:40.672315    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:40.672315    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:40.672315    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:40 GMT
	I0318 12:43:40.672315    5712 round_trippers.go:580]     Audit-Id: bb481d73-8c93-4a5b-a328-a905bbcbfc3d
	I0318 12:43:40.672315    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:40.672315    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:40.672315    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:40.672315    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:41.167865    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:41.167865    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:41.168096    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:41.168096    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:41.173010    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:41.173814    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:41.173814    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:41.173814    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:41 GMT
	I0318 12:43:41.173814    5712 round_trippers.go:580]     Audit-Id: 5394d356-fac2-42f0-a02c-00f7b2337e5f
	I0318 12:43:41.173814    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:41.173814    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:41.173814    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:41.173814    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:41.174640    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:43:41.667620    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:41.667620    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:41.667700    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:41.667700    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:41.672058    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:41.672651    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:41.672651    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:41 GMT
	I0318 12:43:41.672651    5712 round_trippers.go:580]     Audit-Id: 555e16c9-70bd-4af7-8403-db7dde816310
	I0318 12:43:41.672651    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:41.672651    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:41.672651    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:41.672651    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:41.672917    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:42.166448    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:42.166519    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:42.166519    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:42.166519    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:42.170813    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:42.170813    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:42.170813    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:42.170813    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:42.171206    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:42.171206    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:42.171206    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:42 GMT
	I0318 12:43:42.171206    5712 round_trippers.go:580]     Audit-Id: 6568f48c-bd14-4067-a18d-cb238284ce74
	I0318 12:43:42.171554    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:42.666260    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:42.666260    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:42.666344    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:42.666344    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:42.670683    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:42.670683    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:42.670898    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:42.670898    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:42.670961    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:42.670961    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:42.670961    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:42 GMT
	I0318 12:43:42.670961    5712 round_trippers.go:580]     Audit-Id: 54be4d49-ce73-478f-a6ab-7be0188c71af
	I0318 12:43:42.671057    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:43.164942    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:43.164942    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:43.164942    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:43.164942    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:43.169630    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:43.169630    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:43.170038    5712 round_trippers.go:580]     Audit-Id: df0e439a-ebfe-4b43-a800-dad5516e9465
	I0318 12:43:43.170038    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:43.170038    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:43.170038    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:43.170038    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:43.170038    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:43 GMT
	I0318 12:43:43.170410    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:43.667676    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:43.667732    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:43.667732    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:43.667732    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:43.671391    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:43.671391    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:43.671391    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:43.671391    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:43.671391    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:43 GMT
	I0318 12:43:43.671391    5712 round_trippers.go:580]     Audit-Id: f812b4b8-f00b-4242-89e2-9f6535b60454
	I0318 12:43:43.671391    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:43.671391    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:43.671391    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:43.672542    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:43:44.169018    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:44.169084    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:44.169084    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:44.169084    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:44.173851    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:44.173851    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:44.174049    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:44 GMT
	I0318 12:43:44.174049    5712 round_trippers.go:580]     Audit-Id: 28931d4c-4236-4207-a7a1-ec6c251b13b9
	I0318 12:43:44.174049    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:44.174049    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:44.174049    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:44.174049    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:44.174573    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:44.672676    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:44.672676    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:44.672676    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:44.672676    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:44.676120    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:44.676463    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:44.676463    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:44 GMT
	I0318 12:43:44.676463    5712 round_trippers.go:580]     Audit-Id: 4fd437ce-cd21-47ce-8d45-96fe2cca4950
	I0318 12:43:44.676463    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:44.676463    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:44.676463    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:44.676463    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:44.676952    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:45.158939    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:45.158939    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:45.158939    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:45.158939    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:45.164575    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:45.164575    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:45.164575    5712 round_trippers.go:580]     Audit-Id: 053f2122-1224-41ba-ae01-94754d5ab6c7
	I0318 12:43:45.164575    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:45.164575    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:45.164575    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:45.164691    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:45.164691    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:45 GMT
	I0318 12:43:45.164961    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:45.662331    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:45.662331    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:45.662331    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:45.662331    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:45.667927    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:45.668006    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:45.668006    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:45.668006    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:45 GMT
	I0318 12:43:45.668006    5712 round_trippers.go:580]     Audit-Id: bb4f3337-efeb-4bde-bb59-1faa7853281f
	I0318 12:43:45.668006    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:45.668006    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:45.668006    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:45.668316    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:46.158234    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:46.158425    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:46.158425    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:46.158425    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:46.166533    5712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:43:46.166533    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:46.166533    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:46.166533    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:46.166533    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:46.166533    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:46 GMT
	I0318 12:43:46.166533    5712 round_trippers.go:580]     Audit-Id: 5c78fcb5-4079-4939-a3b2-883b77ed2b76
	I0318 12:43:46.166533    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:46.167680    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:46.168224    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:43:46.661537    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:46.661689    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:46.661689    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:46.661781    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:46.665138    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:46.665138    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:46.665138    5712 round_trippers.go:580]     Audit-Id: 4d8ac18f-15ce-48d5-b368-ae69e0a81030
	I0318 12:43:46.665138    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:46.665138    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:46.665138    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:46.665138    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:46.666214    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:46 GMT
	I0318 12:43:46.666404    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:47.162055    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:47.162138    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:47.162138    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:47.162138    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:47.168042    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:47.168042    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:47.168042    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:47.168042    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:47.168042    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:47.168042    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:47.168042    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:47 GMT
	I0318 12:43:47.168042    5712 round_trippers.go:580]     Audit-Id: 161d3d6a-5e53-4618-8fd8-4737398dcb18
	I0318 12:43:47.168042    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:47.665742    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:47.665742    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:47.665742    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:47.665742    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:47.671798    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:43:47.671798    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:47.671798    5712 round_trippers.go:580]     Audit-Id: 2ebfc400-c4be-46e0-ac7f-f37c29495c7c
	I0318 12:43:47.671798    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:47.671798    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:47.671798    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:47.671798    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:47.671798    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:47 GMT
	I0318 12:43:47.671798    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:48.169986    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:48.170244    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:48.170244    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:48.170244    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:48.175065    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:48.175065    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:48.175149    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:48.175149    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:48.175149    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:48.175149    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:48 GMT
	I0318 12:43:48.175149    5712 round_trippers.go:580]     Audit-Id: 397a45e5-2cbf-43e4-9444-436e2b4bf965
	I0318 12:43:48.175149    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:48.176215    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:48.176215    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:43:48.671247    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:48.671247    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:48.671247    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:48.671247    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:48.675849    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:48.676113    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:48.676113    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:48 GMT
	I0318 12:43:48.676186    5712 round_trippers.go:580]     Audit-Id: edce76cf-b374-48e6-92d4-a7ee22289096
	I0318 12:43:48.676186    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:48.676186    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:48.676186    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:48.676186    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:48.676249    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:49.170409    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:49.170494    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:49.170494    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:49.170494    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:49.175368    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:49.175612    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:49.175612    5712 round_trippers.go:580]     Audit-Id: 5551d1fc-bd82-4f98-b17c-f96116e78ebd
	I0318 12:43:49.175612    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:49.175612    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:49.175682    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:49.175738    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:49.181524    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:49 GMT
	I0318 12:43:49.181890    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:49.668925    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:49.669126    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:49.669126    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:49.669126    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:49.675026    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:49.675026    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:49.675026    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:49.675026    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:49.675026    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:49.675026    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:49 GMT
	I0318 12:43:49.675026    5712 round_trippers.go:580]     Audit-Id: cac1e2be-820c-4636-91ac-af38cbdd2b7a
	I0318 12:43:49.675026    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:49.675712    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:50.165361    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:50.165361    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:50.165361    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:50.165361    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:50.169951    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:50.169951    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:50.169951    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:50.169951    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:50.169951    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:50 GMT
	I0318 12:43:50.169951    5712 round_trippers.go:580]     Audit-Id: bc032467-d888-45b8-a3ff-cdaa0ec86ea0
	I0318 12:43:50.170285    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:50.170285    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:50.170633    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:50.666911    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:50.666911    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:50.666911    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:50.666911    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:50.670475    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:50.670475    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:50.670475    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:50 GMT
	I0318 12:43:50.671245    5712 round_trippers.go:580]     Audit-Id: 8c5f782c-703c-473e-aea6-23f07efcb29a
	I0318 12:43:50.671245    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:50.671245    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:50.671245    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:50.671245    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:50.671401    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:50.671984    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:43:51.166483    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:51.166719    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:51.166719    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:51.166719    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:51.173566    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:43:51.173566    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:51.173566    5712 round_trippers.go:580]     Audit-Id: 79c3dd9b-0dc4-4389-b982-c990ebc68b1f
	I0318 12:43:51.173566    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:51.173566    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:51.173566    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:51.173566    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:51.173566    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:51 GMT
	I0318 12:43:51.174160    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:51.666702    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:51.666702    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:51.666702    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:51.666702    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:51.671139    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:51.671514    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:51.671514    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:51.671514    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:51.671514    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:51 GMT
	I0318 12:43:51.671514    5712 round_trippers.go:580]     Audit-Id: d7d1e139-04b9-4dfc-8ca5-67a5d11754a6
	I0318 12:43:51.671514    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:51.671514    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:51.671817    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:52.165682    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:52.165918    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:52.165918    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:52.165918    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:52.173492    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:43:52.173492    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:52.173492    5712 round_trippers.go:580]     Audit-Id: c0cfb54c-d963-41b4-89b0-f38b8b8a26f1
	I0318 12:43:52.173492    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:52.173492    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:52.173492    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:52.173492    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:52.173492    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:52 GMT
	I0318 12:43:52.174184    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:52.663849    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:52.663849    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:52.663849    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:52.663849    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:52.669926    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:43:52.669926    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:52.669926    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:52.669926    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:52.669926    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:52.669926    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:52.669926    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:52 GMT
	I0318 12:43:52.669926    5712 round_trippers.go:580]     Audit-Id: 60a9cca1-02b2-473e-a566-5ab730457c66
	I0318 12:43:52.670629    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:53.161583    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:53.161583    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:53.161583    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:53.161583    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:53.165384    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:53.165384    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:53.165384    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:53.165384    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:53.165384    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:53.165384    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:53 GMT
	I0318 12:43:53.165384    5712 round_trippers.go:580]     Audit-Id: c17c0652-4e63-4448-819b-acf4a87a554d
	I0318 12:43:53.165384    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:53.165384    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:53.165384    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:43:53.661918    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:53.662000    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:53.662000    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:53.662000    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:53.665448    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:53.666305    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:53.666305    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:53.666305    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:53.666305    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:53 GMT
	I0318 12:43:53.666305    5712 round_trippers.go:580]     Audit-Id: 5d5e4360-1a5e-49b0-ae56-832f71c8d1d2
	I0318 12:43:53.666305    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:53.666305    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:53.666543    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:54.162060    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:54.162060    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:54.162060    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:54.162060    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:54.166540    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:54.166540    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:54.166540    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:54.166540    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:54.166540    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:54.166540    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:54.166540    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:54 GMT
	I0318 12:43:54.166540    5712 round_trippers.go:580]     Audit-Id: bbedea72-b620-4b33-93be-fef37cf29dce
	I0318 12:43:54.166540    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:54.661910    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:54.662007    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:54.662086    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:54.662086    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:54.669002    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:43:54.669002    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:54.669002    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:54.669002    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:54.669002    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:54 GMT
	I0318 12:43:54.669002    5712 round_trippers.go:580]     Audit-Id: f1b86ded-4167-413e-af24-1ab28201017f
	I0318 12:43:54.669002    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:54.669002    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:54.669002    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:55.162685    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:55.162961    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:55.162961    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:55.162961    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:55.167303    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:55.167457    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:55.167457    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:55.167457    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:55.167457    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:55.167457    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:55.167457    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:55 GMT
	I0318 12:43:55.167457    5712 round_trippers.go:580]     Audit-Id: 4621dbda-9972-451d-96da-d83a1ff8ee5a
	I0318 12:43:55.167879    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:55.168384    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:43:55.663999    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:55.664100    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:55.664100    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:55.664100    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:55.668466    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:55.668466    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:55.668466    5712 round_trippers.go:580]     Audit-Id: ac6e8358-2ac8-4c22-bb43-e335adc4a9ac
	I0318 12:43:55.668466    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:55.668466    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:55.668466    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:55.668466    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:55.668466    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:55 GMT
	I0318 12:43:55.669317    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:56.162622    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:56.162697    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:56.162697    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:56.162910    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:56.167973    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:56.168035    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:56.168035    5712 round_trippers.go:580]     Audit-Id: efc38058-b725-488d-a35d-0924fc3cf052
	I0318 12:43:56.168035    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:56.168035    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:56.168035    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:56.168035    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:56.168035    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:56 GMT
	I0318 12:43:56.168035    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:56.661868    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:56.661868    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:56.661868    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:56.662125    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:56.667732    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:56.667732    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:56.668126    5712 round_trippers.go:580]     Audit-Id: 4e5d7050-714f-4156-bc75-88983ab263d7
	I0318 12:43:56.668126    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:56.668126    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:56.668126    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:56.668126    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:56.668126    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:56 GMT
	I0318 12:43:56.668341    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:57.162039    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:57.162172    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:57.162172    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:57.162172    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:57.166583    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:57.166583    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:57.166583    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:57.166583    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:57.166583    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:57 GMT
	I0318 12:43:57.166583    5712 round_trippers.go:580]     Audit-Id: b172b030-90bb-40b0-87a7-48a884d4e9cf
	I0318 12:43:57.166583    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:57.166583    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:57.166583    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:57.660828    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:57.660828    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:57.660828    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:57.660828    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:57.665769    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:57.666027    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:57.666027    5712 round_trippers.go:580]     Audit-Id: 6efc7aba-bd41-4b27-ac82-a0d753210de8
	I0318 12:43:57.666027    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:57.666027    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:57.666027    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:57.666027    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:57.666027    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:57 GMT
	I0318 12:43:57.666294    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:57.666928    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
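The GET / Request Headers / Response Status / Response Body lines themselves are client-go's round-tripper debug output: verbosity 6 logs the method, URL and status, 7 adds the HTTP headers, and 8 adds bodies truncated to a fixed budget, which is why every body above ends in "[truncated 5581 chars]" (level 9 would log them in full). A sketch of enabling the same tracing in a standalone program, assuming klog's standard flag wiring:

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// client-go's round_trippers.go emits the GET/Request Headers/
	// Response Status lines above at verbosity 6-7, and the
	// truncated Response Body lines at 8 (9 disables truncation).
	klog.InitFlags(nil)
	_ = flag.Set("v", "8")
	flag.Parse()
	// ... build a clientset and issue requests as usual; every call
	// is now traced like the log above ...
}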
	I0318 12:43:58.161421    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:58.161483    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:58.161483    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:58.161483    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:58.166228    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:58.166228    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:58.166228    5712 round_trippers.go:580]     Audit-Id: d9328c30-d055-4918-8a16-a23e32ed32b8
	I0318 12:43:58.166228    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:58.166329    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:58.166329    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:58.166329    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:58.166329    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:58 GMT
	I0318 12:43:58.166721    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:58.660815    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:58.660815    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:58.660815    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:58.660815    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:58.665000    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:58.665000    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:58.665000    5712 round_trippers.go:580]     Audit-Id: 4eecb8b2-8566-4a52-a80c-71268a4e990e
	I0318 12:43:58.665000    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:58.665000    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:58.665000    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:58.665000    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:58.665000    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:58 GMT
	I0318 12:43:58.665797    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:59.162912    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:59.163213    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:59.163213    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:59.163358    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:59.168861    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:59.168861    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:59.168861    5712 round_trippers.go:580]     Audit-Id: ec9cd06a-239a-4939-864c-7e973a193e42
	I0318 12:43:59.168861    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:59.168861    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:59.168861    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:59.168861    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:59.168861    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:59 GMT
	I0318 12:43:59.169544    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:59.665059    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:59.665129    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:59.665129    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:59.665129    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:59.668877    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:59.668877    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:59.668877    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:59.669308    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:59.669308    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:59.669308    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:59 GMT
	I0318 12:43:59.669308    5712 round_trippers.go:580]     Audit-Id: 3a78e90a-cda6-4dd6-9323-2386ea76d45d
	I0318 12:43:59.669308    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:59.669873    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:59.670470    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:44:00.164173    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:00.164173    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:00.164173    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:00.164457    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:00.168667    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:00.168667    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:00.169177    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:00.169177    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:00 GMT
	I0318 12:44:00.169177    5712 round_trippers.go:580]     Audit-Id: 099c4fb6-e1ce-4e00-a686-367502e4bbfa
	I0318 12:44:00.169177    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:00.169177    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:00.169252    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:00.169312    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:00.660552    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:00.660552    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:00.660837    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:00.660837    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:00.664713    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:00.664915    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:00.664915    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:00.664915    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:00 GMT
	I0318 12:44:00.664915    5712 round_trippers.go:580]     Audit-Id: b12cc792-b1d4-4147-a4e3-2b277c542231
	I0318 12:44:00.664915    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:00.664915    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:00.664915    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:00.665155    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:01.160414    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:01.160414    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:01.160414    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:01.160414    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:01.164971    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:01.164971    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:01.164971    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:01.165421    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:01.165421    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:01.165421    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:01 GMT
	I0318 12:44:01.165421    5712 round_trippers.go:580]     Audit-Id: 10cec0ac-c421-4a5f-ba97-35305c398bac
	I0318 12:44:01.165421    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:01.165937    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:01.658802    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:01.658802    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:01.658878    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:01.658878    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:01.664780    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:01.665564    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:01.665564    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:01.665564    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:01.665564    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:01.665564    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:01.665564    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:01 GMT
	I0318 12:44:01.665564    5712 round_trippers.go:580]     Audit-Id: 425ee5d9-7b7b-4dc0-b098-ba6c41e0c45a
	I0318 12:44:01.665810    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:02.158548    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:02.158637    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:02.158637    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:02.158637    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:02.162950    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:02.162950    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:02.162950    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:02.162950    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:02 GMT
	I0318 12:44:02.162950    5712 round_trippers.go:580]     Audit-Id: d80f6055-9b8e-4f6b-b7f4-7d804fbd67c9
	I0318 12:44:02.162950    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:02.162950    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:02.162950    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:02.162950    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:02.163939    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
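Even truncated, each body still carries the metadata that matters for the test: the minikube.k8s.io/* labels recording the build commit, version v1.32.0 and the primary-node flag, plus the cri-socket annotation pointing at cri-dockerd. Once the Node object is decoded those are ordinary maps; a small companion fragment to the earlier sketch (printNodeMeta is a hypothetical name):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// printNodeMeta pulls out the minikube-specific metadata still visible
// in the truncated bodies above. Hypothetical companion to the earlier
// waitNodeReady sketch.
func printNodeMeta(node *corev1.Node) {
	fmt.Println("version:   ", node.Labels["minikube.k8s.io/version"]) // "v1.32.0"
	fmt.Println("primary:   ", node.Labels["minikube.k8s.io/primary"]) // "true"
	fmt.Println("cri-socket:", node.Annotations["kubeadm.alpha.kubernetes.io/cri-socket"])
}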
	I0318 12:44:02.673079    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:02.673079    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:02.673079    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:02.673435    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:02.682886    5712 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 12:44:02.682886    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:02.682886    5712 round_trippers.go:580]     Audit-Id: 4f4d2e5b-1ac4-44c5-b670-1d184c2aaca2
	I0318 12:44:02.682886    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:02.682886    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:02.682886    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:02.682886    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:02.682886    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:02 GMT
	I0318 12:44:02.682886    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:03.161138    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:03.161138    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:03.161138    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:03.161138    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:03.167501    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:03.167501    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:03.167501    5712 round_trippers.go:580]     Audit-Id: 1dd95ea9-a79a-4ea3-b6f5-00f527b1a9cf
	I0318 12:44:03.167501    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:03.167501    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:03.167501    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:03.167501    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:03.167501    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:03 GMT
	I0318 12:44:03.169299    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:03.661075    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:03.661294    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:03.661294    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:03.661294    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:03.666402    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:03.666402    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:03.666402    5712 round_trippers.go:580]     Audit-Id: 8e5f19dd-f7ca-4733-acb1-e2a5b1cbd95c
	I0318 12:44:03.666402    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:03.666402    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:03.666402    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:03.666402    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:03.666402    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:03 GMT
	I0318 12:44:03.666402    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:04.165006    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:04.165100    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:04.165100    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:04.165100    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:04.195897    5712 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0318 12:44:04.196123    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:04.196123    5712 round_trippers.go:580]     Audit-Id: e80f46d4-1594-44cb-bb65-da3ae6b924e9
	I0318 12:44:04.196123    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:04.196123    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:04.196123    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:04.196123    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:04.196123    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:04 GMT
	I0318 12:44:04.196515    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:04.197042    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
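Every response also carries X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers, stamped by API Priority and Fairness to record which FlowSchema and PriorityLevelConfiguration admitted the request; the one slow reply above (200 OK in 30 milliseconds at 12:44:04) is more likely scheduling noise than throttling, but the UID can be resolved back to a name if needed. A hedged sketch, assuming a client-go release that exposes the flowcontrol v1 group:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printFlowSchemaForUID resolves an X-Kubernetes-Pf-Flowschema-Uid
// header value back to a named FlowSchema. Hypothetical helper.
func printFlowSchemaForUID(ctx context.Context, cs kubernetes.Interface, uid string) error {
	list, err := cs.FlowcontrolV1().FlowSchemas().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, fs := range list.Items {
		if string(fs.UID) == uid {
			fmt.Println("served by FlowSchema:", fs.Name)
		}
	}
	return nil
}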
	I0318 12:44:04.667621    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:04.667621    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:04.667621    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:04.667621    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:04.672348    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:04.672348    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:04.672348    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:04.672348    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:04 GMT
	I0318 12:44:04.672348    5712 round_trippers.go:580]     Audit-Id: 085eb688-0979-446e-a296-57e2b838aa06
	I0318 12:44:04.672348    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:04.672348    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:04.672348    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:04.673077    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:05.168046    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:05.168046    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:05.168046    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:05.168046    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:05.173150    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:05.173206    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:05.173206    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:05.173206    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:05.173206    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:05.173206    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:05.173206    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:05 GMT
	I0318 12:44:05.173206    5712 round_trippers.go:580]     Audit-Id: d3eb1728-48b9-422b-885d-7f724e20c866
	I0318 12:44:05.173206    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:05.666758    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:05.666758    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:05.667033    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:05.667033    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:05.670801    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:05.671522    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:05.671522    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:05.671522    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:05.671522    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:05 GMT
	I0318 12:44:05.671522    5712 round_trippers.go:580]     Audit-Id: 24d35937-1634-4588-b834-3ce3d677167e
	I0318 12:44:05.671522    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:05.671522    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:05.671786    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:06.167467    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:06.167467    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:06.167467    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:06.167612    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:06.172084    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:06.172337    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:06.172382    5712 round_trippers.go:580]     Audit-Id: 6f366b87-644e-426b-a3d3-b690c46d6eec
	I0318 12:44:06.172382    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:06.172382    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:06.172382    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:06.172382    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:06.172382    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:06 GMT
	I0318 12:44:06.172882    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:06.666191    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:06.666267    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:06.666267    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:06.666267    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:06.670638    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:06.670943    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:06.670943    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:06.671143    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:06.671143    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:06.671143    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:06.671143    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:06 GMT
	I0318 12:44:06.671143    5712 round_trippers.go:580]     Audit-Id: bc13a46e-e1d5-4d0d-b74a-f988d9488f28
	I0318 12:44:06.671439    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:06.671935    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
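Rather than the hand-rolled ticker in the first sketch, the apimachinery wait package expresses this poll-until-deadline shape directly; an equivalent formulation, again as an illustration rather than minikube's own code:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// pollNodeReady expresses the same wait with the stock helper: poll
// every 500ms, give up when the 10-minute budget or ctx expires.
func pollNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API hiccups as retryable
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}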
	I0318 12:44:07.166156    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:07.166428    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:07.166497    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:07.166497    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:07.173264    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:07.173264    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:07.173721    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:07 GMT
	I0318 12:44:07.173721    5712 round_trippers.go:580]     Audit-Id: 1fa2bf9e-8a6c-42c1-b181-73897d659493
	I0318 12:44:07.173721    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:07.173721    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:07.173721    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:07.173721    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:07.173721    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:07.662739    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:07.662739    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:07.662739    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:07.662739    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:07.666309    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:07.667103    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:07.667167    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:07 GMT
	I0318 12:44:07.667167    5712 round_trippers.go:580]     Audit-Id: 898260f1-de67-4354-b2bd-af0296c415a9
	I0318 12:44:07.667167    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:07.667167    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:07.667167    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:07.667167    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:07.667422    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:08.166089    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:08.166089    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:08.166089    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:08.166164    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:08.169961    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:08.170950    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:08.170950    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:08.170950    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:08.170950    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:08.170950    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:08.170950    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:08 GMT
	I0318 12:44:08.170950    5712 round_trippers.go:580]     Audit-Id: ff7854a6-d09d-41b0-a2da-3b6605a1b799
	I0318 12:44:08.171448    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:08.665335    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:08.665395    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:08.665395    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:08.665395    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:08.669961    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:08.669961    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:08.669961    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:08.670372    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:08.670372    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:08.670372    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:08 GMT
	I0318 12:44:08.670372    5712 round_trippers.go:580]     Audit-Id: 7d283d31-8c0a-4899-8cb6-2b7b6aebaf62
	I0318 12:44:08.670372    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:08.670546    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:09.165579    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:09.165579    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:09.165579    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:09.165579    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:09.169957    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:09.170772    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:09.170772    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:09 GMT
	I0318 12:44:09.170772    5712 round_trippers.go:580]     Audit-Id: d39956e3-7811-4048-b0f0-a203dab89f57
	I0318 12:44:09.170772    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:09.170772    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:09.170772    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:09.170772    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:09.170772    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:09.171686    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:44:09.668898    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:09.668985    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:09.668985    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:09.668985    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:09.673241    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:09.673241    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:09.673241    5712 round_trippers.go:580]     Audit-Id: 7bd9a8f7-f224-4080-80dc-c3deb3f3adab
	I0318 12:44:09.673241    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:09.673241    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:09.673241    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:09.673241    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:09.673241    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:09 GMT
	I0318 12:44:09.674130    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:10.168195    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:10.168195    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:10.168292    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:10.168292    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:10.171597    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:10.171905    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:10.171905    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:10.171905    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:10.171905    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:10.171905    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:10.171905    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:10 GMT
	I0318 12:44:10.171905    5712 round_trippers.go:580]     Audit-Id: dd6fedc9-4dca-4008-ac70-cf0695ab5a03
	I0318 12:44:10.172347    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:10.670758    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:10.670955    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:10.670955    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:10.670955    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:10.676699    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:10.677122    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:10.677122    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:10.677122    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:10.677122    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:10.677122    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:10.677193    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:10 GMT
	I0318 12:44:10.677193    5712 round_trippers.go:580]     Audit-Id: 475fd1f4-0476-482d-9b0a-474c4f9acc5e
	I0318 12:44:10.677907    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:11.171902    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:11.171902    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:11.171994    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:11.171994    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:11.198871    5712 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0318 12:44:11.199048    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:11.199111    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:11.199111    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:11.199111    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:11.199111    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:11 GMT
	I0318 12:44:11.199111    5712 round_trippers.go:580]     Audit-Id: 826ebd82-eecd-4302-b789-b813d1d18b66
	I0318 12:44:11.199111    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:11.199111    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:11.200096    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:44:11.662448    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:11.662448    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:11.662503    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:11.662503    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:11.666914    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:11.666944    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:11.667021    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:11 GMT
	I0318 12:44:11.667021    5712 round_trippers.go:580]     Audit-Id: 0768de30-4e2f-4267-9394-814a7468bc6f
	I0318 12:44:11.667021    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:11.667021    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:11.667021    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:11.667021    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:11.667021    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2012","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 12:44:11.667611    5712 node_ready.go:49] node "multinode-642600" has status "Ready":"True"
	I0318 12:44:11.667611    5712 node_ready.go:38] duration metric: took 35.0099667s for node "multinode-642600" to be "Ready" ...
	I0318 12:44:11.667611    5712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
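(The half-second cadence of the GETs above is minikube's node_ready poll: fetch the Node object, inspect its "Ready" condition, and retry until it flips to "True" or the wait budget expires. Below is a minimal client-go sketch of that pattern, not minikube's actual implementation: the kubeconfig location, the 500 ms/6 m timing constants, and the error handling are illustrative assumptions; the node name is taken from this run. `kubectl wait --for=condition=Ready node/multinode-642600 --timeout=6m` expresses the same wait from the CLI.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the Node's Ready condition is True —
// the field the log prints as "Ready":"False" / "Ready":"True".
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500 ms with a 6-minute ceiling, mirroring the cadence
	// and timeout visible in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, getErr := cs.CoreV1().Nodes().Get(ctx, "multinode-642600", metav1.GetOptions{})
			if getErr != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			return nodeIsReady(node), nil
		})
	fmt.Println("node ready:", err == nil)
}

Returning false with a nil error from the condition function keeps the poll alive across transient API failures, which matches the log's behavior of simply retrying every interval.)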
	I0318 12:44:11.667611    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:11.667611    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:11.667611    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:11.667611    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:11.673018    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:11.673863    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:11.673863    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:11.673863    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:11.673863    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:11.673863    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:11 GMT
	I0318 12:44:11.673863    5712 round_trippers.go:580]     Audit-Id: 51660724-8c83-40d6-941f-d102519e2f70
	I0318 12:44:11.673863    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:11.675246    5712 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2013"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83076 chars]
	I0318 12:44:11.679124    5712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:11.679247    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:11.679247    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:11.679357    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:11.679357    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:11.681576    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:11.681576    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:11.681576    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:11.681576    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:11.681576    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:11.681576    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:11.681576    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:11 GMT
	I0318 12:44:11.682568    5712 round_trippers.go:580]     Audit-Id: 68692235-b9a4-41aa-b52b-d932b2baf2b3
	I0318 12:44:11.682820    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:11.683623    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:11.683677    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:11.683677    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:11.683677    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:11.687249    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:11.687249    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:11.687249    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:11.687249    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:11.687249    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:11.687249    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:11 GMT
	I0318 12:44:11.687249    5712 round_trippers.go:580]     Audit-Id: 87ff9236-27d7-45d4-a6ba-5c07a01fd91b
	I0318 12:44:11.687249    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:11.687249    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2012","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 12:44:12.189290    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:12.189290    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:12.189290    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:12.189290    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:12.196764    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:44:12.196764    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:12.196764    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:12.196764    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:12.196764    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:12.196764    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:12 GMT
	I0318 12:44:12.196764    5712 round_trippers.go:580]     Audit-Id: 28ad4020-5575-4e1b-ac1d-227b725cbc4c
	I0318 12:44:12.196764    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:12.197406    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:12.198214    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:12.198214    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:12.198214    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:12.198214    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:12.201458    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:12.201528    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:12.201528    5712 round_trippers.go:580]     Audit-Id: 23ef2e71-f422-43a3-b48e-67f0ccafb647
	I0318 12:44:12.201528    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:12.201528    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:12.201528    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:12.201528    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:12.201652    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:12 GMT
	I0318 12:44:12.202389    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2012","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 12:44:12.689821    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:12.689895    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:12.689895    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:12.689895    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:12.695086    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:12.695086    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:12.695086    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:12 GMT
	I0318 12:44:12.695086    5712 round_trippers.go:580]     Audit-Id: 3909224e-955b-4e27-9bcb-fcd165c794b4
	I0318 12:44:12.695086    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:12.695086    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:12.695086    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:12.695086    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:12.695424    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:12.695616    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:12.695616    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:12.695616    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:12.695616    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:12.699509    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:12.699509    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:12.699509    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:12 GMT
	I0318 12:44:12.699509    5712 round_trippers.go:580]     Audit-Id: 4fba9b28-f2f3-45c7-b234-4ee90fa733f7
	I0318 12:44:12.699509    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:12.699509    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:12.699509    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:12.699509    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:12.699509    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2012","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 12:44:13.190512    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:13.190512    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:13.190512    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:13.190512    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:13.195114    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:13.195218    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:13.195218    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:13.195218    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:13.195218    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:13 GMT
	I0318 12:44:13.195218    5712 round_trippers.go:580]     Audit-Id: 4e79249d-da04-4313-93a9-3354b854e0b8
	I0318 12:44:13.195218    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:13.195218    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:13.195218    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:13.195847    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:13.195847    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:13.195847    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:13.195847    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:13.199600    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:13.200093    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:13.200093    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:13 GMT
	I0318 12:44:13.200093    5712 round_trippers.go:580]     Audit-Id: 2bbe5cdf-efff-4dc6-a479-6d79cddb9e6d
	I0318 12:44:13.200093    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:13.200093    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:13.200093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:13.200093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:13.200961    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:13.689055    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:13.689055    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:13.689055    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:13.689055    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:13.696084    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:44:13.696084    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:13.696084    5712 round_trippers.go:580]     Audit-Id: f2a50618-4423-4ac6-aec3-94fedde79059
	I0318 12:44:13.696084    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:13.696084    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:13.696084    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:13.696084    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:13.696084    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:13 GMT
	I0318 12:44:13.696084    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:13.697207    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:13.697207    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:13.697207    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:13.697207    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:13.701058    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:13.701204    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:13.701204    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:13.701204    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:13.701204    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:13.701204    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:13 GMT
	I0318 12:44:13.701204    5712 round_trippers.go:580]     Audit-Id: fc37cca9-bcba-45a3-83b3-c52fd084afd9
	I0318 12:44:13.701204    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:13.701513    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:13.702009    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
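(Each cycle above pairs a GET of the coredns pod with a GET of its node: judging by the paired requests, the pod_ready wait re-checks the hosting node alongside the pod itself. The "Ready":"False" verdict comes from the pod's PodReady condition. A self-contained sketch of that condition check follows; the helper name and the empty-status example are illustrative, not minikube's code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podIsReady reports whether the PodReady condition is True — the
// check behind the "Ready":"False" lines printed by pod_ready.go.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A pod with no conditions posted yet (e.g. a freshly restarted
	// coredns) reports not ready, so the poll keeps cycling as logged.
	fmt.Println(podIsReady(&corev1.Pod{})) // false
}

Note that a pod stuck at resourceVersion 1865, as in the responses above, has not been updated by the kubelet since the restart, which is why every iteration sees the same object.)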
	I0318 12:44:14.187179    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:14.189894    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:14.190057    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:14.190057    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:14.193266    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:14.193960    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:14.193960    5712 round_trippers.go:580]     Audit-Id: a7cc1a20-6fe3-4d80-9ab2-8fad4d555963
	I0318 12:44:14.193960    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:14.193960    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:14.193960    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:14.193960    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:14.193960    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:14 GMT
	I0318 12:44:14.194292    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:14.195492    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:14.195492    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:14.195582    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:14.195582    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:14.198859    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:14.199018    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:14.199018    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:14 GMT
	I0318 12:44:14.199018    5712 round_trippers.go:580]     Audit-Id: 4f4d0bf8-3015-400a-a8bb-4e7a9bd8ab66
	I0318 12:44:14.199018    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:14.199018    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:14.199018    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:14.199018    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:14.199387    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:14.686852    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:14.686852    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:14.686852    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:14.686852    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:14.692315    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:14.692315    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:14.692315    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:14.692315    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:14.692315    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:14.692315    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:14 GMT
	I0318 12:44:14.692315    5712 round_trippers.go:580]     Audit-Id: cff36635-0841-44ed-9c51-8f1b0b3c60f0
	I0318 12:44:14.692315    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:14.692315    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:14.693313    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:14.693394    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:14.693394    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:14.693394    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:14.696634    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:14.696634    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:14.697142    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:14.697142    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:14.697142    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:14.697142    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:14 GMT
	I0318 12:44:14.697142    5712 round_trippers.go:580]     Audit-Id: e6729c94-2e8e-424c-94ad-7ef4ff4b4226
	I0318 12:44:14.697142    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:14.697648    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:15.188309    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:15.188309    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:15.188309    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:15.188309    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:15.192917    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:15.192917    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:15.192917    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:15.192917    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:15.192917    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:15.192917    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:15 GMT
	I0318 12:44:15.192917    5712 round_trippers.go:580]     Audit-Id: 3285ed7b-7ced-468f-a841-9faa545a27cc
	I0318 12:44:15.192917    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:15.194509    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:15.194967    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:15.194967    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:15.194967    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:15.194967    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:15.201267    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:15.201267    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:15.201267    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:15.201267    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:15 GMT
	I0318 12:44:15.201267    5712 round_trippers.go:580]     Audit-Id: 47d7aa2f-b56d-419e-8a32-c595a9d8290b
	I0318 12:44:15.201267    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:15.201267    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:15.201267    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:15.201996    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:15.688423    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:15.688485    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:15.688485    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:15.688485    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:15.692666    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:15.692666    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:15.692666    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:15.692666    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:15 GMT
	I0318 12:44:15.692666    5712 round_trippers.go:580]     Audit-Id: ecf79032-3aa7-4bc6-82d9-2ab8eb39336d
	I0318 12:44:15.692666    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:15.692666    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:15.692666    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:15.692666    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:15.693763    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:15.693763    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:15.693763    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:15.693763    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:15.697739    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:15.697739    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:15.697739    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:15.697739    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:15 GMT
	I0318 12:44:15.697739    5712 round_trippers.go:580]     Audit-Id: a0300dd2-98c2-4a28-9836-b8a0594f4b97
	I0318 12:44:15.697739    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:15.697739    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:15.697739    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:15.698346    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:16.189806    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:16.190125    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:16.190125    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:16.190125    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:16.194747    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:16.195302    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:16.195302    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:16 GMT
	I0318 12:44:16.195302    5712 round_trippers.go:580]     Audit-Id: 05ea2c45-deb2-4d18-be4e-93f7ba227959
	I0318 12:44:16.195399    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:16.195399    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:16.195438    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:16.195438    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:16.195670    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:16.196293    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:16.196293    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:16.196293    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:16.196293    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:16.199853    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:16.200850    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:16.200877    5712 round_trippers.go:580]     Audit-Id: 41764b48-4925-4b1b-a501-f26dd948e26b
	I0318 12:44:16.200877    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:16.200877    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:16.200877    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:16.200877    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:16.200877    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:16 GMT
	I0318 12:44:16.200930    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:16.201669    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:16.679732    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:16.679839    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:16.679839    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:16.679839    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:16.683217    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:16.683217    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:16.683217    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:16.683217    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:16.683217    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:16.683217    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:16.684224    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:16 GMT
	I0318 12:44:16.684224    5712 round_trippers.go:580]     Audit-Id: 6608c079-a2d7-43cd-ad67-050578afc1c7
	I0318 12:44:16.684505    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:16.685391    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:16.685391    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:16.685391    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:16.685391    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:16.688745    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:16.688745    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:16.688745    5712 round_trippers.go:580]     Audit-Id: a2f0aa03-092b-4358-9b32-baa0817a5b93
	I0318 12:44:16.688745    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:16.688745    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:16.688745    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:16.688745    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:16.688745    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:16 GMT
	I0318 12:44:16.689438    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:17.194490    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:17.194490    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:17.194490    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:17.194490    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:17.199181    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:17.199181    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:17.199181    5712 round_trippers.go:580]     Audit-Id: 05ee3b1e-0451-4971-9b1a-020bcd3ac56a
	I0318 12:44:17.199181    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:17.199181    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:17.199181    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:17.199181    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:17.199181    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:17 GMT
	I0318 12:44:17.199633    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:17.200002    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:17.200584    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:17.200584    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:17.200584    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:17.203962    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:17.203962    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:17.203962    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:17.203962    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:17.203962    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:17.203962    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:17.203962    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:17 GMT
	I0318 12:44:17.203962    5712 round_trippers.go:580]     Audit-Id: bb5ac872-8323-4a4d-a0ec-a35b06d5db0c
	I0318 12:44:17.204590    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:17.682732    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:17.682732    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:17.682732    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:17.682732    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:17.686405    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:17.686888    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:17.686948    5712 round_trippers.go:580]     Audit-Id: 50db086d-a553-4435-81c2-2b0d2e1ec308
	I0318 12:44:17.686948    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:17.686948    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:17.686948    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:17.687007    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:17.687042    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:17 GMT
	I0318 12:44:17.687076    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:17.688025    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:17.688077    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:17.688077    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:17.688105    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:17.690996    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:17.690996    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:17.690996    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:17.690996    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:17 GMT
	I0318 12:44:17.690996    5712 round_trippers.go:580]     Audit-Id: ed2b67b5-49dc-4fc3-a73a-9504062879ea
	I0318 12:44:17.691876    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:17.691938    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:17.691938    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:17.692532    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:18.186906    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:18.186983    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:18.186983    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:18.186983    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:18.191508    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:18.191710    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:18.191710    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:18 GMT
	I0318 12:44:18.191710    5712 round_trippers.go:580]     Audit-Id: fed979db-c6cc-4e5b-9d3b-057dbcd455a7
	I0318 12:44:18.191710    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:18.191710    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:18.191810    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:18.191810    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:18.192038    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:18.192303    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:18.192845    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:18.192845    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:18.192845    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:18.196147    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:18.196147    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:18.196147    5712 round_trippers.go:580]     Audit-Id: ddf6c14c-7522-42b4-83f3-b6b9b9732224
	I0318 12:44:18.196147    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:18.196147    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:18.196147    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:18.196147    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:18.196147    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:18 GMT
	I0318 12:44:18.196447    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:18.687290    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:18.687402    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:18.687402    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:18.687402    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:18.691698    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:18.692219    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:18.692219    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:18.692219    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:18.692219    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:18.692219    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:18 GMT
	I0318 12:44:18.692219    5712 round_trippers.go:580]     Audit-Id: 357060ba-d2e7-4223-9fa8-c16bba65f32a
	I0318 12:44:18.692219    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:18.692512    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:18.693183    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:18.693183    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:18.693183    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:18.693277    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:18.696420    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:18.696420    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:18.696420    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:18.696420    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:18.696420    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:18.696420    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:18 GMT
	I0318 12:44:18.696420    5712 round_trippers.go:580]     Audit-Id: eced3421-3a7f-44af-97cf-2857f3c4baa5
	I0318 12:44:18.696420    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:18.697337    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:18.697779    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:19.184814    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:19.187660    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:19.187660    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:19.187660    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:19.191157    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:19.191157    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:19.191157    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:19.191157    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:19.191157    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:19.192206    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:19 GMT
	I0318 12:44:19.192206    5712 round_trippers.go:580]     Audit-Id: d4ed1ae8-9361-4227-9ae3-5b041ed49910
	I0318 12:44:19.192206    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:19.192456    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:19.193304    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:19.193304    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:19.193304    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:19.193304    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:19.196093    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:19.196093    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:19.196093    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:19.196093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:19.196093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:19.196093    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:19 GMT
	I0318 12:44:19.196594    5712 round_trippers.go:580]     Audit-Id: 314e0cd0-9003-4a0b-9ff8-150072c7c15c
	I0318 12:44:19.196594    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:19.196826    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:19.684415    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:19.684415    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:19.684415    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:19.684415    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:19.689310    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:19.689310    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:19.689310    5712 round_trippers.go:580]     Audit-Id: d9991c05-4c4e-42ab-8450-d123f48c2fde
	I0318 12:44:19.689310    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:19.689310    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:19.689310    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:19.689310    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:19.689310    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:19 GMT
	I0318 12:44:19.689310    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:19.690473    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:19.690473    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:19.690473    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:19.690473    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:19.696541    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:19.696607    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:19.696607    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:19.696607    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:19.696607    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:19.696607    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:19 GMT
	I0318 12:44:19.696607    5712 round_trippers.go:580]     Audit-Id: 7166bc9d-fda4-460c-b473-b52bf353b794
	I0318 12:44:19.696607    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:19.696607    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:20.184737    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:20.184737    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:20.184737    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:20.184855    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:20.189772    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:20.189772    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:20.189772    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:20.189772    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:20.189772    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:20 GMT
	I0318 12:44:20.189772    5712 round_trippers.go:580]     Audit-Id: 05ee64e2-c995-4ea2-94d3-d68d214e36bd
	I0318 12:44:20.189772    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:20.189772    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:20.190405    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:20.191046    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:20.191046    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:20.191046    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:20.191046    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:20.193639    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:20.193639    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:20.193639    5712 round_trippers.go:580]     Audit-Id: e1a9c45c-df43-4395-aaa9-80f01d696884
	I0318 12:44:20.193639    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:20.194626    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:20.194626    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:20.194626    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:20.194626    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:20 GMT
	I0318 12:44:20.195054    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:20.685980    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:20.686195    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:20.686195    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:20.686195    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:20.690175    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:20.690175    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:20.690175    5712 round_trippers.go:580]     Audit-Id: 944456e4-665e-4ca2-b14c-6c08b49da556
	I0318 12:44:20.690175    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:20.690258    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:20.690258    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:20.690258    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:20.690258    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:20 GMT
	I0318 12:44:20.690493    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:20.691284    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:20.691284    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:20.691284    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:20.691284    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:20.697025    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:20.697025    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:20.697025    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:20 GMT
	I0318 12:44:20.697025    5712 round_trippers.go:580]     Audit-Id: 8e9daf81-187e-42ba-9516-554a19108696
	I0318 12:44:20.697025    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:20.697025    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:20.697025    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:20.697025    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:20.697880    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:20.697906    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:21.186002    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:21.186002    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:21.186002    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:21.186002    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:21.190030    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:21.190030    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:21.190030    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:21.190030    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:21.190030    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:21.190030    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:21 GMT
	I0318 12:44:21.190030    5712 round_trippers.go:580]     Audit-Id: 296a70f8-cd1d-4354-9ff9-3472574721e8
	I0318 12:44:21.190030    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:21.190030    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:21.191000    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:21.191000    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:21.191000    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:21.191000    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:21.194008    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:21.194008    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:21.194008    5712 round_trippers.go:580]     Audit-Id: ccf25fed-8468-4983-8188-e5585640997e
	I0318 12:44:21.194008    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:21.194008    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:21.194008    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:21.194008    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:21.194008    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:21 GMT
	I0318 12:44:21.194994    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:21.692322    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:21.692420    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:21.692420    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:21.692420    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:21.697256    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:21.697325    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:21.697325    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:21.697325    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:21.697325    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:21 GMT
	I0318 12:44:21.697325    5712 round_trippers.go:580]     Audit-Id: fa195ac6-3ca5-49be-aeed-5b87e2bed243
	I0318 12:44:21.697325    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:21.697325    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:21.697650    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:21.698534    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:21.698534    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:21.698534    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:21.698534    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:21.702603    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:21.702708    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:21.702708    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:21.702708    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:21.702708    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:21.702708    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:21.702708    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:21 GMT
	I0318 12:44:21.702708    5712 round_trippers.go:580]     Audit-Id: 5bea0762-0250-41b5-ac29-f243420a227e
	I0318 12:44:21.702919    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:22.190369    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:22.190369    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:22.190369    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:22.190369    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:22.195258    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:22.195258    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:22.195258    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:22 GMT
	I0318 12:44:22.195258    5712 round_trippers.go:580]     Audit-Id: 68c04b8c-0662-45cb-8226-2b8fa49c7a37
	I0318 12:44:22.195258    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:22.195258    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:22.195479    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:22.195479    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:22.195564    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:22.196754    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:22.196754    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:22.196754    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:22.196754    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:22.199867    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:22.200351    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:22.200351    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:22.200351    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:22.200351    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:22 GMT
	I0318 12:44:22.200351    5712 round_trippers.go:580]     Audit-Id: f0cd9ef8-d1e9-4635-8d0e-39ee443ce231
	I0318 12:44:22.200351    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:22.200351    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:22.200932    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:22.689351    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:22.689527    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:22.689527    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:22.689527    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:22.694615    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:22.695693    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:22.695693    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:22.695693    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:22.695737    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:22 GMT
	I0318 12:44:22.695737    5712 round_trippers.go:580]     Audit-Id: 51c77479-7ef8-4549-bdcf-29cdea895d25
	I0318 12:44:22.695737    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:22.695737    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:22.695737    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:22.696791    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:22.696791    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:22.696791    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:22.696791    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:22.699978    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:22.700875    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:22.700875    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:22 GMT
	I0318 12:44:22.700875    5712 round_trippers.go:580]     Audit-Id: 8656091a-bace-4ade-968e-ed7071a2bb7f
	I0318 12:44:22.700875    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:22.700925    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:22.700925    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:22.700925    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:22.701006    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:22.701617    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:23.191048    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:23.191048    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:23.191048    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:23.191048    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:23.195874    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:23.195874    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:23.195874    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:23.195874    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:23.195874    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:23.195874    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:23.195874    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:23 GMT
	I0318 12:44:23.195874    5712 round_trippers.go:580]     Audit-Id: ca521754-fc97-47a7-9195-3cd38de7268b
	I0318 12:44:23.195874    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:23.196943    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:23.196943    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:23.196943    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:23.196943    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:23.203500    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:23.203500    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:23.203500    5712 round_trippers.go:580]     Audit-Id: a1eee80f-b7c3-49f8-b7ce-0e095d26368d
	I0318 12:44:23.203500    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:23.203500    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:23.203500    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:23.203500    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:23.203500    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:23 GMT
	I0318 12:44:23.203500    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:23.681225    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:23.681225    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:23.681329    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:23.681329    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:23.683987    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:23.684629    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:23.684629    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:23.684629    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:23 GMT
	I0318 12:44:23.684629    5712 round_trippers.go:580]     Audit-Id: 8c0646e4-5030-4a93-8604-cf8c53d9b492
	I0318 12:44:23.684629    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:23.684629    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:23.684629    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:23.684949    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:23.685800    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:23.685800    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:23.685800    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:23.685800    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:23.689144    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:23.689144    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:23.689144    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:23.689144    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:23.689144    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:23.689144    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:23 GMT
	I0318 12:44:23.689144    5712 round_trippers.go:580]     Audit-Id: d97ae3b5-349d-4383-b52f-89febb47de97
	I0318 12:44:23.689144    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:23.689683    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:24.183512    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:24.187181    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:24.187181    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:24.187181    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:24.194438    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:44:24.194438    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:24.194438    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:24 GMT
	I0318 12:44:24.194438    5712 round_trippers.go:580]     Audit-Id: 365ca239-6035-4726-bb21-40a23ef0f551
	I0318 12:44:24.194438    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:24.194438    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:24.194438    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:24.194438    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:24.195105    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:24.195891    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:24.195891    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:24.195891    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:24.195891    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:24.198527    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:24.198527    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:24.198527    5712 round_trippers.go:580]     Audit-Id: 75c2da6b-08aa-49fc-adc0-619837128160
	I0318 12:44:24.198527    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:24.198527    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:24.198527    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:24.198527    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:24.198527    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:24 GMT
	I0318 12:44:24.199345    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:24.687835    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:24.687835    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:24.687835    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:24.687835    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:24.692441    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:24.692922    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:24.692922    5712 round_trippers.go:580]     Audit-Id: 7a0307ff-7b11-411e-a738-e35606bccc8f
	I0318 12:44:24.692922    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:24.692922    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:24.692922    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:24.692922    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:24.692922    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:24 GMT
	I0318 12:44:24.693236    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:24.694028    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:24.694082    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:24.694082    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:24.694082    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:24.697874    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:24.697874    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:24.697874    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:24 GMT
	I0318 12:44:24.697874    5712 round_trippers.go:580]     Audit-Id: 1de05b38-ea53-4391-954f-0c681c3d1b89
	I0318 12:44:24.697874    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:24.697874    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:24.697874    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:24.697874    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:24.697874    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:25.185296    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:25.185596    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:25.185596    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:25.185670    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:25.189231    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:25.189231    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:25.189231    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:25.189231    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:25 GMT
	I0318 12:44:25.189231    5712 round_trippers.go:580]     Audit-Id: 0770fdd1-d8a4-42be-aa42-264d5f9306d2
	I0318 12:44:25.189231    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:25.189231    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:25.189231    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:25.190314    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:25.191217    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:25.191269    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:25.191269    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:25.191269    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:25.194875    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:25.194875    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:25.194875    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:25 GMT
	I0318 12:44:25.194992    5712 round_trippers.go:580]     Audit-Id: e94c15c0-f098-4d72-bc69-d6fa71412fd0
	I0318 12:44:25.194992    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:25.194992    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:25.194992    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:25.194992    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:25.195053    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:25.195788    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:25.687613    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:25.687690    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:25.687690    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:25.687690    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:25.691052    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:25.692017    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:25.692085    5712 round_trippers.go:580]     Audit-Id: 5770b70f-5661-4482-aea1-14759485817d
	I0318 12:44:25.692085    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:25.692085    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:25.692085    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:25.692085    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:25.692085    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:25 GMT
	I0318 12:44:25.692085    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:25.692995    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:25.693067    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:25.693067    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:25.693067    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:25.696293    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:25.696293    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:25.696573    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:25.696633    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:25 GMT
	I0318 12:44:25.696633    5712 round_trippers.go:580]     Audit-Id: 6c69f8a8-44ea-48a3-98a3-425f3910b939
	I0318 12:44:25.696633    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:25.696633    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:25.696633    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:25.697314    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:26.186485    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:26.186560    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:26.186560    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:26.186560    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:26.189912    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:26.190966    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:26.190966    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:26.190966    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:26.190966    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:26 GMT
	I0318 12:44:26.190966    5712 round_trippers.go:580]     Audit-Id: ba95d9cb-04a3-4e2b-b6ea-4d0392e390cd
	I0318 12:44:26.190966    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:26.190966    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:26.191316    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:26.192037    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:26.192037    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:26.192037    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:26.192037    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:26.194649    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:26.194649    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:26.194649    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:26.194649    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:26.194649    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:26.194649    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:26.194649    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:26 GMT
	I0318 12:44:26.194649    5712 round_trippers.go:580]     Audit-Id: b2e83234-7314-4055-8529-007cc48facc0
	I0318 12:44:26.196345    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:26.688382    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:26.688382    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:26.688382    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:26.688382    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:26.691771    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:26.691771    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:26.691771    5712 round_trippers.go:580]     Audit-Id: 1df16aa0-1c0b-47c7-b307-379de7306aba
	I0318 12:44:26.691771    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:26.692341    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:26.692341    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:26.692341    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:26.692341    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:26 GMT
	I0318 12:44:26.692341    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:26.693311    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:26.693311    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:26.693311    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:26.693311    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:26.697630    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:26.697630    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:26.697814    5712 round_trippers.go:580]     Audit-Id: 8a85b6ef-88f8-4373-97fa-7b45bd3345f0
	I0318 12:44:26.697814    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:26.697814    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:26.697814    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:26.697814    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:26.697814    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:26 GMT
	I0318 12:44:26.698213    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:27.190331    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:27.190331    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:27.190331    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:27.190331    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:27.194805    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:27.195296    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:27.195296    5712 round_trippers.go:580]     Audit-Id: e4b40c8e-375d-470d-ab22-2b071736004f
	I0318 12:44:27.195296    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:27.195296    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:27.195296    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:27.195296    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:27.195296    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:27 GMT
	I0318 12:44:27.195744    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:27.196186    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:27.196186    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:27.196186    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:27.196186    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:27.199968    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:27.199968    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:27.199968    5712 round_trippers.go:580]     Audit-Id: e2b1872b-a14e-4414-a484-ce21db4a68c0
	I0318 12:44:27.199968    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:27.199968    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:27.199968    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:27.199968    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:27.199968    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:27 GMT
	I0318 12:44:27.201362    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:27.201839    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:27.688272    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:27.688542    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:27.688542    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:27.688542    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:27.692785    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:27.692837    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:27.692837    5712 round_trippers.go:580]     Audit-Id: 86c94cb3-990f-4e2f-93b0-fd291a1af69e
	I0318 12:44:27.692837    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:27.692837    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:27.692880    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:27.692880    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:27.692880    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:27 GMT
	I0318 12:44:27.693596    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:27.693745    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:27.694322    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:27.694322    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:27.694322    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:27.698129    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:27.698129    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:27.698556    5712 round_trippers.go:580]     Audit-Id: 374e4ac8-2526-4f20-a555-bdbcc8c5cc52
	I0318 12:44:27.698556    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:27.698556    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:27.698556    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:27.698556    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:27.698556    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:27 GMT
	I0318 12:44:27.698556    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:28.189054    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:28.189165    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:28.189165    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:28.189165    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:28.193679    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:28.193679    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:28.193679    5712 round_trippers.go:580]     Audit-Id: f446158c-2311-4125-ab37-1e5aa0f36123
	I0318 12:44:28.193679    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:28.193679    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:28.193852    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:28.193852    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:28.193852    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:28 GMT
	I0318 12:44:28.194247    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:28.195028    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:28.195028    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:28.195102    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:28.195102    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:28.198286    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:28.198286    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:28.198286    5712 round_trippers.go:580]     Audit-Id: 151bd438-a533-4683-9f1b-c1b7d337b5ff
	I0318 12:44:28.198286    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:28.198286    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:28.198996    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:28.198996    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:28.198996    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:28 GMT
	I0318 12:44:28.199379    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:28.690475    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:28.690475    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:28.690475    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:28.690475    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:28.694382    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:28.694382    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:28.695026    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:28 GMT
	I0318 12:44:28.695026    5712 round_trippers.go:580]     Audit-Id: 39426202-2074-49a8-9309-df2b267ebbdd
	I0318 12:44:28.695026    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:28.695026    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:28.695026    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:28.695026    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:28.695203    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:28.696056    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:28.696056    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:28.696108    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:28.696108    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:28.717239    5712 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0318 12:44:28.717239    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:28.717624    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:28 GMT
	I0318 12:44:28.717624    5712 round_trippers.go:580]     Audit-Id: 5f536d96-bee0-44ac-8da4-a1c6465d7479
	I0318 12:44:28.717624    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:28.717624    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:28.717624    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:28.717624    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:28.717947    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:29.191330    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:29.194978    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:29.194978    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:29.194978    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:29.200376    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:29.201410    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:29.201410    5712 round_trippers.go:580]     Audit-Id: 33130013-2b8a-4157-ab96-7c712701556c
	I0318 12:44:29.201443    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:29.201443    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:29.201443    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:29.201443    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:29.201443    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:29 GMT
	I0318 12:44:29.201847    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:29.202639    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:29.202704    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:29.202704    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:29.202704    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:29.205630    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:29.206029    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:29.206029    5712 round_trippers.go:580]     Audit-Id: 92ce3129-989c-4664-a7af-232965a1f1c1
	I0318 12:44:29.206029    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:29.206029    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:29.206029    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:29.206029    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:29.206029    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:29 GMT
	I0318 12:44:29.206029    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:29.206828    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:29.690504    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:29.690504    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:29.690504    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:29.690504    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:29.695038    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:29.695038    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:29.695537    5712 round_trippers.go:580]     Audit-Id: 090178bd-ce42-4dc9-b27b-705727259d7e
	I0318 12:44:29.695537    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:29.695537    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:29.695537    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:29.695537    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:29.695537    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:29 GMT
	I0318 12:44:29.696280    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:29.697440    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:29.697440    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:29.697440    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:29.697440    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:29.703505    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:29.703505    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:29.703505    5712 round_trippers.go:580]     Audit-Id: b04fcd21-8d69-4295-9e68-878a9948b0e4
	I0318 12:44:29.703505    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:29.703505    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:29.703505    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:29.703505    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:29.703505    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:29 GMT
	I0318 12:44:29.704493    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:30.191740    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:30.191979    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:30.191979    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:30.191979    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:30.195803    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:30.195803    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:30.196340    5712 round_trippers.go:580]     Audit-Id: c29edaca-1b98-4123-a622-957e56360b5f
	I0318 12:44:30.196340    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:30.196340    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:30.196340    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:30.196340    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:30.196472    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:30 GMT
	I0318 12:44:30.196588    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:30.197195    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:30.197195    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:30.197195    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:30.197195    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:30.201030    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:30.201030    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:30.201109    5712 round_trippers.go:580]     Audit-Id: 0eab9f9f-514f-44d8-a8c2-a86204bc2150
	I0318 12:44:30.201109    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:30.201109    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:30.201109    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:30.201109    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:30.201109    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:30 GMT
	I0318 12:44:30.201329    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:30.693710    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:30.693710    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:30.693775    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:30.693775    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:30.702563    5712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:44:30.702563    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:30.702828    5712 round_trippers.go:580]     Audit-Id: 77fdf92d-f89d-44a9-a05b-7867d180cf1f
	I0318 12:44:30.702828    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:30.702828    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:30.702828    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:30.702828    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:30.702828    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:30 GMT
	I0318 12:44:30.703079    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:30.703922    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:30.703979    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:30.703979    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:30.703979    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:30.708169    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:30.708169    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:30.708169    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:30.708169    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:30 GMT
	I0318 12:44:30.708169    5712 round_trippers.go:580]     Audit-Id: 675effea-ad05-42ea-b002-7080a2628de4
	I0318 12:44:30.708169    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:30.708169    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:30.708390    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:30.708799    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:31.192382    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:31.192638    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:31.192638    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:31.192638    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:31.196436    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:31.196940    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:31.196940    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:31 GMT
	I0318 12:44:31.196940    5712 round_trippers.go:580]     Audit-Id: 05f011bb-0606-4d55-aa31-22f1ba5d74a9
	I0318 12:44:31.196940    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:31.196940    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:31.196940    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:31.196940    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:31.196940    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:31.197867    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:31.197923    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:31.197923    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:31.197923    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:31.201214    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:31.201214    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:31.201214    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:31.201214    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:31.201214    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:31.201517    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:31 GMT
	I0318 12:44:31.201517    5712 round_trippers.go:580]     Audit-Id: 5992ea35-60a3-46ee-8e9e-7321539abc76
	I0318 12:44:31.201517    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:31.201926    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:31.692932    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:31.693033    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:31.693033    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:31.693033    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:31.697758    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:31.697940    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:31.697940    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:31.698024    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:31.698024    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:31.698024    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:31 GMT
	I0318 12:44:31.698024    5712 round_trippers.go:580]     Audit-Id: c55e7335-af75-4dec-b634-eb6eec4557da
	I0318 12:44:31.698024    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:31.698305    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:31.699123    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:31.699123    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:31.699123    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:31.699123    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:31.702003    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:31.702513    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:31.702550    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:31.702580    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:31 GMT
	I0318 12:44:31.702580    5712 round_trippers.go:580]     Audit-Id: 459647b1-86c1-45a8-a345-1c404f6135f6
	I0318 12:44:31.702580    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:31.702580    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:31.702580    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:31.702580    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:31.703314    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:32.194214    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:32.194214    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:32.194214    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:32.194214    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:32.198608    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:32.198608    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:32.198608    5712 round_trippers.go:580]     Audit-Id: c6750ae0-b2f8-4d95-a5e2-1794b85fdfa9
	I0318 12:44:32.198608    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:32.198608    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:32.198608    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:32.198608    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:32.198608    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:32 GMT
	I0318 12:44:32.199230    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:32.199937    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:32.199937    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:32.199937    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:32.199937    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:32.203521    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:32.203521    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:32.203635    5712 round_trippers.go:580]     Audit-Id: acb32f20-87d5-4042-b31b-2ba23efd38d3
	I0318 12:44:32.203635    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:32.203635    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:32.203635    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:32.203635    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:32.203635    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:32 GMT
	I0318 12:44:32.204306    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:32.680219    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:32.680219    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:32.680308    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:32.680308    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:32.685079    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:32.685264    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:32.685264    5712 round_trippers.go:580]     Audit-Id: 1798e15c-0678-4db5-80ee-5182b0ae30f7
	I0318 12:44:32.685264    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:32.685264    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:32.685264    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:32.685264    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:32.685264    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:32 GMT
	I0318 12:44:32.685525    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:32.686404    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:32.686471    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:32.686471    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:32.686471    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:32.689074    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:32.689074    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:32.690020    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:32.690060    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:32.690060    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:32 GMT
	I0318 12:44:32.690060    5712 round_trippers.go:580]     Audit-Id: 90749611-f9ba-4355-8857-6e4c035fbd2e
	I0318 12:44:32.690060    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:32.690060    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:32.690491    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:33.194838    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:33.194838    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:33.194838    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:33.194838    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:33.199930    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:33.199930    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:33.199930    5712 round_trippers.go:580]     Audit-Id: 3d30d2ef-3a3f-46d0-a78c-959c1dba9928
	I0318 12:44:33.199930    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:33.199930    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:33.199930    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:33.199930    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:33.199930    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:33 GMT
	I0318 12:44:33.199930    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:33.201121    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:33.201236    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:33.201236    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:33.201236    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:33.204447    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:33.204728    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:33.204728    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:33.204728    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:33.204728    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:33 GMT
	I0318 12:44:33.204728    5712 round_trippers.go:580]     Audit-Id: 7577c64a-ff66-4949-8a72-a79b5e15d602
	I0318 12:44:33.204728    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:33.204728    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:33.204728    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:33.693743    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:33.693743    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:33.693743    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:33.693743    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:33.699262    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:33.699262    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:33.699262    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:33.699262    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:33.699262    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:33.699262    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:33.699262    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:33 GMT
	I0318 12:44:33.699262    5712 round_trippers.go:580]     Audit-Id: d8692647-16fb-4411-997d-4417241c566d
	I0318 12:44:33.699262    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:33.700442    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:33.700442    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:33.700522    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:33.700522    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:33.704429    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:33.705187    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:33.705187    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:33.705187    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:33.705187    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:33.705187    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:33 GMT
	I0318 12:44:33.705187    5712 round_trippers.go:580]     Audit-Id: c0c3137e-e27c-4083-9ab2-c8913db71ef9
	I0318 12:44:33.705187    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:33.705187    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:33.706096    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:34.180798    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:34.183822    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:34.183822    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:34.183822    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:34.187601    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:34.188131    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:34.188208    5712 round_trippers.go:580]     Audit-Id: 2b69c6c9-ece5-46a7-96ed-20d93b99d56b
	I0318 12:44:34.188208    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:34.188208    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:34.188208    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:34.188208    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:34.188208    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:34 GMT
	I0318 12:44:34.188208    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:34.188959    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:34.188959    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:34.188959    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:34.188959    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:34.193430    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:34.193646    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:34.193646    5712 round_trippers.go:580]     Audit-Id: c121a1db-2043-415d-8ee3-bec023bf88e8
	I0318 12:44:34.193646    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:34.193646    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:34.193646    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:34.193646    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:34.193646    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:34 GMT
	I0318 12:44:34.193646    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:34.681413    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:34.681413    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:34.681413    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:34.681413    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:34.685995    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:34.686919    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:34.686919    5712 round_trippers.go:580]     Audit-Id: 1aca597f-aeed-49ce-911b-1f0a40618d1d
	I0318 12:44:34.686919    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:34.686974    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:34.686974    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:34.686974    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:34.686974    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:34 GMT
	I0318 12:44:34.687201    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:34.688012    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:34.688073    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:34.688130    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:34.688130    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:34.690813    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:34.691451    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:34.691451    5712 round_trippers.go:580]     Audit-Id: 75e02a0b-d062-403c-b9b8-85ac7b141d25
	I0318 12:44:34.691451    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:34.691451    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:34.691451    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:34.691451    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:34.691524    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:34 GMT
	I0318 12:44:34.691829    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:35.183360    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:35.183360    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:35.183360    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:35.183360    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:35.190039    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:35.190191    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:35.190191    5712 round_trippers.go:580]     Audit-Id: 0fcd2b6e-e3e2-432c-9c01-456f86970ea2
	I0318 12:44:35.190191    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:35.190191    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:35.190191    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:35.190191    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:35.190191    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:35 GMT
	I0318 12:44:35.190546    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:35.191235    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:35.191235    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:35.191235    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:35.191235    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:35.200958    5712 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 12:44:35.200958    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:35.200958    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:35 GMT
	I0318 12:44:35.200958    5712 round_trippers.go:580]     Audit-Id: 4f08f624-3192-4a83-8d17-2430c00cdb13
	I0318 12:44:35.201224    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:35.201224    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:35.201224    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:35.201224    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:35.201455    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:35.692096    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:35.692096    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:35.692096    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:35.692096    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:35.696093    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:35.696093    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:35.696093    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:35 GMT
	I0318 12:44:35.696093    5712 round_trippers.go:580]     Audit-Id: 41df05e5-0a39-4364-afea-7bf683432ecd
	I0318 12:44:35.696093    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:35.696093    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:35.696093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:35.696093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:35.696093    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:35.697093    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:35.697093    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:35.697093    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:35.697093    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:35.701124    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:35.701124    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:35.701124    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:35.701124    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:35.701191    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:35.701191    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:35.701191    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:35 GMT
	I0318 12:44:35.701191    5712 round_trippers.go:580]     Audit-Id: b64a76c6-18c4-4946-8b07-13cdb8e11a54
	I0318 12:44:35.701649    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:36.185176    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:36.185176    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:36.185176    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:36.185176    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:36.192062    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:36.192356    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:36.192356    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:36 GMT
	I0318 12:44:36.192356    5712 round_trippers.go:580]     Audit-Id: 15d44108-2e7f-4f78-877b-ff6a558f6a23
	I0318 12:44:36.192356    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:36.192356    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:36.192356    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:36.192356    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:36.192626    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:36.192783    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:36.193314    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:36.193314    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:36.193366    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:36.197405    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:36.197405    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:36.197405    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:36 GMT
	I0318 12:44:36.197718    5712 round_trippers.go:580]     Audit-Id: 176965b7-6a70-4a66-9075-a021c56abc5a
	I0318 12:44:36.197718    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:36.197718    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:36.197718    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:36.197718    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:36.197914    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:36.198444    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:36.680855    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:36.680937    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:36.681011    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:36.681011    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:36.685221    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:36.685221    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:36.685221    5712 round_trippers.go:580]     Audit-Id: e8b15e15-b4c7-4495-9882-65f5e53b717c
	I0318 12:44:36.685584    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:36.685584    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:36.685584    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:36.685584    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:36.685584    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:36 GMT
	I0318 12:44:36.685887    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:36.686605    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:36.686605    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:36.686605    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:36.686605    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:36.692101    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:36.692101    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:36.692101    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:36.692101    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:36.692101    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:36.692101    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:36.692101    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:36 GMT
	I0318 12:44:36.692101    5712 round_trippers.go:580]     Audit-Id: dffd16db-5eee-4d28-8446-40cfe18bab83
	I0318 12:44:36.692676    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.181620    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:37.181620    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.181620    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.181620    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.190921    5712 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 12:44:37.191328    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.191328    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.191328    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.191328    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.191328    5712 round_trippers.go:580]     Audit-Id: 0cb190c9-721e-45d3-9526-648b92746f64
	I0318 12:44:37.191328    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.191328    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.192878    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"2048","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6723 chars]
	I0318 12:44:37.193736    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:37.193770    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.193770    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.193770    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.198616    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:37.198616    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.198616    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.198616    5712 round_trippers.go:580]     Audit-Id: ab65a1d6-c291-43b8-ad03-c5fedcaae94f
	I0318 12:44:37.198616    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.198616    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.198616    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.198896    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.199278    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.682572    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:37.682689    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.682689    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.682689    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.690172    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:44:37.690172    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.690172    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.690172    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.690172    5712 round_trippers.go:580]     Audit-Id: c8fc5adf-36e3-4185-8151-237941e1352a
	I0318 12:44:37.690172    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.690172    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.690172    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.690172    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"2054","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0318 12:44:37.691407    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:37.691407    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.691407    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.691407    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.695001    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:37.695001    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.695001    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.695001    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.695001    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.695001    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.695001    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.695001    5712 round_trippers.go:580]     Audit-Id: 36831683-9c6e-41d3-946a-ea967d026971
	I0318 12:44:37.695970    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.695970    5712 pod_ready.go:92] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:37.696589    5712 pod_ready.go:81] duration metric: took 26.016683s for pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace to be "Ready" ...
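(The 26.0 s wait recorded above matches the ~500 ms poll cadence visible in the request timestamps (12:44:32.690 → 12:44:33.194 → 12:44:33.693 → …): roughly 52 GET-pod/GET-node round trips before the pod's Ready condition flipped to "True", at resourceVersion 2054 in the final response body.)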
	I0318 12:44:37.696589    5712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.696646    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-642600
	I0318 12:44:37.696784    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.696852    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.696852    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.700210    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:37.700210    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.700210    5712 round_trippers.go:580]     Audit-Id: 58cbf2d5-df20-4b65-8ace-75dbc407c605
	I0318 12:44:37.700210    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.700210    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.700210    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.700210    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.700210    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.700210    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-642600","namespace":"kube-system","uid":"6f0ca14e-af4b-4442-8a48-28b69c699976","resourceVersion":"1972","creationTimestamp":"2024-03-18T12:43:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.148.129:2379","kubernetes.io/config.hash":"d5f09afee1a6ef36657c1ae3335ddda6","kubernetes.io/config.mirror":"d5f09afee1a6ef36657c1ae3335ddda6","kubernetes.io/config.seen":"2024-03-18T12:43:24.228249982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:43:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0318 12:44:37.701132    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:37.701132    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.701132    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.701132    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.705261    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:37.705261    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.705261    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.705261    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.705261    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.705261    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.705261    5712 round_trippers.go:580]     Audit-Id: 0e8f5552-c2ac-460d-9018-0c509cb0f965
	I0318 12:44:37.705261    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.705261    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.705847    5712 pod_ready.go:92] pod "etcd-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:37.705847    5712 pod_ready.go:81] duration metric: took 9.2581ms for pod "etcd-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.705847    5712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.705847    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-642600
	I0318 12:44:37.705847    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.705847    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.705847    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.709027    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:37.709027    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.709134    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.709134    5712 round_trippers.go:580]     Audit-Id: fc189f3a-7220-4f47-ba34-3f2c56a72300
	I0318 12:44:37.709134    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.709134    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.709134    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.709134    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.709251    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-642600","namespace":"kube-system","uid":"ab8e6b8b-cbac-4c90-8f57-9af2760ced9c","resourceVersion":"1944","creationTimestamp":"2024-03-18T12:43:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.148.129:8443","kubernetes.io/config.hash":"624de65f019baf96d4a0e2fb6064e413","kubernetes.io/config.mirror":"624de65f019baf96d4a0e2fb6064e413","kubernetes.io/config.seen":"2024-03-18T12:43:24.228255882Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:43:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0318 12:44:37.710305    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:37.710376    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.710376    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.710376    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.712672    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:37.712672    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.712672    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.713034    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.713034    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.713034    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.713034    5712 round_trippers.go:580]     Audit-Id: 67a82f34-0039-46a5-9e2a-433ae69903b9
	I0318 12:44:37.713034    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.713375    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.713736    5712 pod_ready.go:92] pod "kube-apiserver-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:37.713736    5712 pod_ready.go:81] duration metric: took 7.8892ms for pod "kube-apiserver-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.713736    5712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.713995    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-642600
	I0318 12:44:37.714092    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.714092    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.714157    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.716468    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:37.716468    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.716468    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.716468    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.716468    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.716468    5712 round_trippers.go:580]     Audit-Id: 84381428-69d2-4e01-9fc2-08bb6c474a3a
	I0318 12:44:37.716468    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.716468    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.717414    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-642600","namespace":"kube-system","uid":"1dd2a576-c5a0-44e5-b194-545e8b18962c","resourceVersion":"1976","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a1608bc774d0b3e96e1b6fbbded5cb52","kubernetes.io/config.mirror":"a1608bc774d0b3e96e1b6fbbded5cb52","kubernetes.io/config.seen":"2024-03-18T12:18:50.896437006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0318 12:44:37.717414    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:37.717414    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.717414    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.717414    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.720290    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:37.721075    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.721075    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.721141    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.721141    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.721141    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.721141    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.721268    5712 round_trippers.go:580]     Audit-Id: 1960dc73-4e58-4d0a-a800-653714b542b6
	I0318 12:44:37.721519    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.722106    5712 pod_ready.go:92] pod "kube-controller-manager-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:37.722227    5712 pod_ready.go:81] duration metric: took 8.3696ms for pod "kube-controller-manager-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.722227    5712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4dg79" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.722437    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4dg79
	I0318 12:44:37.722510    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.722510    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.722562    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.726868    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:37.726868    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.726868    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.726868    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.726868    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.726868    5712 round_trippers.go:580]     Audit-Id: dd7b971e-e716-44f2-8184-4375f57c8a3b
	I0318 12:44:37.726868    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.726868    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.727356    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4dg79","generateName":"kube-proxy-","namespace":"kube-system","uid":"449242c2-ad12-4da5-b339-3be7ab8a9b16","resourceVersion":"1871","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0318 12:44:37.728352    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:37.728352    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.728352    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.728352    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.731347    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:37.731347    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.731347    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.731619    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.731619    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.731619    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.731619    5712 round_trippers.go:580]     Audit-Id: cfb72b6a-73ad-456c-82a4-8b95e610267a
	I0318 12:44:37.731619    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.731934    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.732330    5712 pod_ready.go:92] pod "kube-proxy-4dg79" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:37.732330    5712 pod_ready.go:81] duration metric: took 10.046ms for pod "kube-proxy-4dg79" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.732330    5712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-khbjt" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.887622    5712 request.go:629] Waited for 155.2912ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-khbjt
	I0318 12:44:37.887887    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-khbjt
	I0318 12:44:37.887951    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.887951    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.887951    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.893530    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:37.893530    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.893619    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.893619    5712 round_trippers.go:580]     Audit-Id: b2dd9516-d217-4a89-88bc-6060a711ccac
	I0318 12:44:37.893639    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.893639    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.893639    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.893639    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.894001    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-khbjt","generateName":"kube-proxy-","namespace":"kube-system","uid":"594efa46-7e30-40e6-92dd-9c9c80bc787a","resourceVersion":"1825","creationTimestamp":"2024-03-18T12:27:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:27:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5771 chars]
	I0318 12:44:38.091944    5712 request.go:629] Waited for 197.5062ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m03
	I0318 12:44:38.092032    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m03
	I0318 12:44:38.092323    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:38.092405    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:38.092405    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:38.095865    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:38.096657    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:38.096657    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:38.096657    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:38 GMT
	I0318 12:44:38.096657    5712 round_trippers.go:580]     Audit-Id: e2072c20-1fa0-4ee1-b265-59b232f06eb6
	I0318 12:44:38.096657    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:38.096657    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:38.096657    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:38.096927    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m03","uid":"e9bc5257-e8c0-493d-a533-c2a8a832d45e","resourceVersion":"1992","creationTimestamp":"2024-03-18T12:38:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_38_47_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0318 12:44:38.097511    5712 pod_ready.go:97] node "multinode-642600-m03" hosting pod "kube-proxy-khbjt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600-m03" has status "Ready":"Unknown"
	I0318 12:44:38.097659    5712 pod_ready.go:81] duration metric: took 365.3266ms for pod "kube-proxy-khbjt" in "kube-system" namespace to be "Ready" ...
	E0318 12:44:38.097659    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600-m03" hosting pod "kube-proxy-khbjt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600-m03" has status "Ready":"Unknown"
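	The request.go:629 "Waited ... due to client-side throttling, not priority and fairness" entries above come from client-go's own token-bucket rate limiter, not from server-side API Priority and Fairness: with the client-go defaults of QPS=5 and Burst=10, a burst of GETs like this readiness sweep gets spaced out on the client. A sketch of where those knobs live (the raised values are illustrative only, and the default kubeconfig path is an assumption):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go defaults to QPS=5, Burst=10; requests beyond the burst
        // are delayed, which is exactly what request.go:629 reports above.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("clientset configured: %T\n", cs)
    }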
	I0318 12:44:38.097659    5712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vts9f" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:38.295471    5712 request.go:629] Waited for 197.3092ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vts9f
	I0318 12:44:38.295660    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vts9f
	I0318 12:44:38.295660    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:38.295660    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:38.295660    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:38.300505    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:38.300505    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:38.300505    5712 round_trippers.go:580]     Audit-Id: f8a44f84-dfc4-47d2-93b8-d3f1effc3787
	I0318 12:44:38.300505    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:38.300505    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:38.300505    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:38.300505    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:38.301381    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:38 GMT
	I0318 12:44:38.302269    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vts9f","generateName":"kube-proxy-","namespace":"kube-system","uid":"9545be8f-07fd-49dd-99bd-e9e976e65e7b","resourceVersion":"2032","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5771 chars]
	I0318 12:44:38.482633    5712 request.go:629] Waited for 179.5233ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:44:38.482860    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:44:38.483005    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:38.483005    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:38.483005    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:38.487439    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:38.487439    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:38.487711    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:38.487711    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:38.487711    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:38.487711    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:38.487711    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:38 GMT
	I0318 12:44:38.487711    5712 round_trippers.go:580]     Audit-Id: 4426fb33-3c5e-443c-9ac3-5aac8865b391
	I0318 12:44:38.488292    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"2040","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4583 chars]
	I0318 12:44:38.488767    5712 pod_ready.go:97] node "multinode-642600-m02" hosting pod "kube-proxy-vts9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600-m02" has status "Ready":"Unknown"
	I0318 12:44:38.488837    5712 pod_ready.go:81] duration metric: took 391.1756ms for pod "kube-proxy-vts9f" in "kube-system" namespace to be "Ready" ...
	E0318 12:44:38.488837    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600-m02" hosting pod "kube-proxy-vts9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600-m02" has status "Ready":"Unknown"
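	The pod_ready.go:97/:66 lines show the gating rule: a pod is skipped rather than failed when its hosting node reports Ready "Unknown", the status a node reaches once its kubelet stops posting heartbeats. A sketch of reading that condition for one of the nodes above (illustrative only, not minikube's code; default kubeconfig assumed):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReadyStatus returns "True", "False", or "Unknown" for the node's
    // Ready condition; "Unknown" matches the skips logged above.
    func nodeReadyStatus(node *corev1.Node) corev1.ConditionStatus {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status
            }
        }
        return corev1.ConditionUnknown
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(),
            "multinode-642600-m02", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("node %s Ready=%s\n", node.Name, nodeReadyStatus(node))
    }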
	I0318 12:44:38.488837    5712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:38.685928    5712 request.go:629] Waited for 196.8223ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-642600
	I0318 12:44:38.686345    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-642600
	I0318 12:44:38.686345    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:38.686345    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:38.686460    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:38.691410    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:38.691410    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:38.691410    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:38.691410    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:38.691410    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:38 GMT
	I0318 12:44:38.691410    5712 round_trippers.go:580]     Audit-Id: 820e668c-1131-49a1-b2d2-fc96c4698517
	I0318 12:44:38.691410    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:38.691410    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:38.691410    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-642600","namespace":"kube-system","uid":"52e29d3b-d6e9-4109-916d-74123a2ab190","resourceVersion":"1955","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cf50844b540be8ed0b3e767db413ac8f","kubernetes.io/config.mirror":"cf50844b540be8ed0b3e767db413ac8f","kubernetes.io/config.seen":"2024-03-18T12:18:50.896438106Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0318 12:44:38.889238    5712 request.go:629] Waited for 196.7125ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:38.889449    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:38.889449    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:38.889449    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:38.889541    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:38.893888    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:38.893888    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:38.893888    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:38.893888    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:38.893888    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:38.893888    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:38 GMT
	I0318 12:44:38.893888    5712 round_trippers.go:580]     Audit-Id: 3276ce98-f6ba-49b9-8f1f-3889cbb8318b
	I0318 12:44:38.893888    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:38.893888    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:38.894599    5712 pod_ready.go:92] pod "kube-scheduler-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:38.894599    5712 pod_ready.go:81] duration metric: took 405.6364ms for pod "kube-scheduler-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:38.894599    5712 pod_ready.go:38] duration metric: took 27.2268168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
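	The 27.2s total logged by pod_ready.go:38 covers the per-pod waits above, one wait per matching pod for each selector in the logged list. Enumerating what each selector matches is a single List call per selector; a sketch with the selectors copied from that log line (default kubeconfig assumed; this is not the test's own code):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The label selectors pod_ready.go:38 reports waiting on.
        selectors := []string{
            "k8s-app=kube-dns",
            "component=etcd",
            "component=kube-apiserver",
            "component=kube-controller-manager",
            "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
                metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                panic(err)
            }
            fmt.Printf("%-35s %d pod(s)\n", sel, len(pods.Items))
        }
    }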
	I0318 12:44:38.894599    5712 api_server.go:52] waiting for apiserver process to appear ...
	I0318 12:44:38.906171    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 12:44:38.935847    5712 command_runner.go:130] > a48a6d310b86
	I0318 12:44:38.935993    5712 logs.go:276] 1 containers: [a48a6d310b86]
	I0318 12:44:38.946472    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 12:44:38.970744    5712 command_runner.go:130] > 8e7911b58c58
	I0318 12:44:38.971709    5712 logs.go:276] 1 containers: [8e7911b58c58]
	I0318 12:44:38.982095    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 12:44:39.006125    5712 command_runner.go:130] > fcf17db92b35
	I0318 12:44:39.006125    5712 command_runner.go:130] > e81f1d2fdb36
	I0318 12:44:39.006125    5712 logs.go:276] 2 containers: [fcf17db92b35 e81f1d2fdb36]
	I0318 12:44:39.016676    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 12:44:39.047010    5712 command_runner.go:130] > bd1e4f4d262e
	I0318 12:44:39.047113    5712 command_runner.go:130] > 47777d4c0b90
	I0318 12:44:39.047113    5712 logs.go:276] 2 containers: [bd1e4f4d262e 47777d4c0b90]
	I0318 12:44:39.057608    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 12:44:39.083081    5712 command_runner.go:130] > 575b41a3a85a
	I0318 12:44:39.083081    5712 command_runner.go:130] > 4bbad08fe59a
	I0318 12:44:39.083790    5712 logs.go:276] 2 containers: [575b41a3a85a 4bbad08fe59a]
	I0318 12:44:39.093122    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 12:44:39.118727    5712 command_runner.go:130] > 14ae9398d33b
	I0318 12:44:39.118727    5712 command_runner.go:130] > a54be4436901
	I0318 12:44:39.118827    5712 logs.go:276] 2 containers: [14ae9398d33b a54be4436901]
	I0318 12:44:39.129760    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 12:44:39.159256    5712 command_runner.go:130] > 9fec05a61d2a
	I0318 12:44:39.159256    5712 command_runner.go:130] > 5cf42651cb21
	I0318 12:44:39.160324    5712 logs.go:276] 2 containers: [9fec05a61d2a 5cf42651cb21]
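	Each logs.go:276 count above is fed by one docker ps -a --filter=name=k8s_<component> --format={{.ID}} run, executed by ssh_runner inside the VM. The same enumeration against a local docker daemon, sketched with os/exec (the SSH hop is deliberately omitted, which is a simplification of what the log shows):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists IDs of containers whose name starts with
    // "k8s_<component>", mirroring the filter used in the log above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        } {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %v\n", c, ids)
        }
    }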
	I0318 12:44:39.160401    5712 logs.go:123] Gathering logs for kindnet [5cf42651cb21] ...
	I0318 12:44:39.160496    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf42651cb21"
	I0318 12:44:39.200666    5712 command_runner.go:130] ! I0318 12:29:43.278241       1 main.go:227] handling current node
	I0318 12:44:39.200992    5712 command_runner.go:130] ! I0318 12:29:43.278258       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.200992    5712 command_runner.go:130] ! I0318 12:29:43.278267       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:43.279034       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:43.279112       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:53.290788       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:53.290919       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:53.290935       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:53.290944       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:53.291443       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:53.291608       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:03.307097       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:03.307405       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:03.307624       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:03.307713       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:03.307989       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:03.308095       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:13.315412       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:13.315512       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:13.315528       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:13.315537       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:13.316187       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:13.316277       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:23.331223       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:23.331328       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:23.331344       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:23.331352       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:23.331895       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:23.332071       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:33.338821       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:33.338848       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:33.338860       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:33.338866       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:33.339004       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:33.339017       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:43.354041       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:43.354126       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:43.354142       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:43.354153       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:43.354280       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:43.354293       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:53.362056       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:53.362198       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:53.362230       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201629    5712 command_runner.go:130] ! I0318 12:30:53.362239       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201629    5712 command_runner.go:130] ! I0318 12:30:53.362887       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201629    5712 command_runner.go:130] ! I0318 12:30:53.363194       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201714    5712 command_runner.go:130] ! I0318 12:31:03.378995       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201714    5712 command_runner.go:130] ! I0318 12:31:03.379039       1 main.go:227] handling current node
	I0318 12:44:39.201714    5712 command_runner.go:130] ! I0318 12:31:03.379096       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201714    5712 command_runner.go:130] ! I0318 12:31:03.379108       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201794    5712 command_runner.go:130] ! I0318 12:31:03.379432       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201794    5712 command_runner.go:130] ! I0318 12:31:03.379450       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201794    5712 command_runner.go:130] ! I0318 12:31:13.392082       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201794    5712 command_runner.go:130] ! I0318 12:31:13.392188       1 main.go:227] handling current node
	I0318 12:44:39.201861    5712 command_runner.go:130] ! I0318 12:31:13.392224       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201861    5712 command_runner.go:130] ! I0318 12:31:13.392249       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201861    5712 command_runner.go:130] ! I0318 12:31:13.392820       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201961    5712 command_runner.go:130] ! I0318 12:31:13.392974       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201961    5712 command_runner.go:130] ! I0318 12:31:23.402269       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201994    5712 command_runner.go:130] ! I0318 12:31:23.402391       1 main.go:227] handling current node
	I0318 12:44:39.201994    5712 command_runner.go:130] ! I0318 12:31:23.402408       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201994    5712 command_runner.go:130] ! I0318 12:31:23.402417       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201994    5712 command_runner.go:130] ! I0318 12:31:23.403188       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202056    5712 command_runner.go:130] ! I0318 12:31:23.403223       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202080    5712 command_runner.go:130] ! I0318 12:31:33.413396       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:33.413577       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:33.413639       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:33.413654       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:33.414293       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:33.414437       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:43.424274       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:43.424320       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:43.424332       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:43.424339       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:43.424591       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:43.424608       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:53.433473       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:53.433591       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:53.433607       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:53.433615       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:53.433851       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:53.433959       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:03.443363       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:03.443411       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:03.443424       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:03.443450       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:03.444602       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:03.445390       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:13.460166       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:13.460215       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:13.460229       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:13.460237       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:13.460679       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:13.460697       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:23.479958       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:23.480007       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:23.480024       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:23.480032       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:23.480521       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:23.480578       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:33.491143       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:33.491190       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:33.491204       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202647    5712 command_runner.go:130] ! I0318 12:32:33.491211       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202647    5712 command_runner.go:130] ! I0318 12:32:33.491340       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202688    5712 command_runner.go:130] ! I0318 12:32:33.491369       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202738    5712 command_runner.go:130] ! I0318 12:32:43.505355       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202738    5712 command_runner.go:130] ! I0318 12:32:43.505474       1 main.go:227] handling current node
	I0318 12:44:39.202815    5712 command_runner.go:130] ! I0318 12:32:43.505490       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202815    5712 command_runner.go:130] ! I0318 12:32:43.505498       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202815    5712 command_runner.go:130] ! I0318 12:32:43.505666       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202815    5712 command_runner.go:130] ! I0318 12:32:43.505696       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202898    5712 command_runner.go:130] ! I0318 12:32:53.513310       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202898    5712 command_runner.go:130] ! I0318 12:32:53.513340       1 main.go:227] handling current node
	I0318 12:44:39.202898    5712 command_runner.go:130] ! I0318 12:32:53.513350       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202898    5712 command_runner.go:130] ! I0318 12:32:53.513357       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202898    5712 command_runner.go:130] ! I0318 12:32:53.513783       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202986    5712 command_runner.go:130] ! I0318 12:32:53.513865       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202986    5712 command_runner.go:130] ! I0318 12:33:03.527897       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202986    5712 command_runner.go:130] ! I0318 12:33:03.528343       1 main.go:227] handling current node
	I0318 12:44:39.202986    5712 command_runner.go:130] ! I0318 12:33:03.528485       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202986    5712 command_runner.go:130] ! I0318 12:33:03.528785       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203075    5712 command_runner.go:130] ! I0318 12:33:03.529110       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203097    5712 command_runner.go:130] ! I0318 12:33:03.529205       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203097    5712 command_runner.go:130] ! I0318 12:33:13.538048       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203097    5712 command_runner.go:130] ! I0318 12:33:13.538183       1 main.go:227] handling current node
	I0318 12:44:39.203097    5712 command_runner.go:130] ! I0318 12:33:13.538222       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203163    5712 command_runner.go:130] ! I0318 12:33:13.538317       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203163    5712 command_runner.go:130] ! I0318 12:33:13.538750       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203223    5712 command_runner.go:130] ! I0318 12:33:13.538888       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203223    5712 command_runner.go:130] ! I0318 12:33:23.555771       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:23.555820       1 main.go:227] handling current node
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:23.555895       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:23.555905       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:23.556511       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:23.556780       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:33.566023       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:33.566190       1 main.go:227] handling current node
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:33.566208       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:33.566217       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:33.566931       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:33.567031       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:43.581332       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:43.581382       1 main.go:227] handling current node
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:43.581449       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:43.581482       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:43.582063       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:43.582166       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:53.588426       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:53.588602       1 main.go:227] handling current node
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:53.588619       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:53.588628       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:53.588919       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:53.588937       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:03.604902       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:03.605007       1 main.go:227] handling current node
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:03.605023       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:03.605032       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:03.605612       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:03.605696       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:13.618369       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:13.618488       1 main.go:227] handling current node
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:13.618585       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:13.618604       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:13.618738       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:13.618747       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:23.626772       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:23.626887       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:23.626903       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:23.626911       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:23.627415       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:23.627448       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:33.644122       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:33.644215       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:33.644233       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:33.644757       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:33.645128       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:33.645240       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:43.661684       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:43.661731       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:43.661744       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:43.661751       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:43.662532       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:43.662645       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:53.676649       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:53.677242       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:53.677518       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:53.677631       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:53.677873       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:53.677905       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:03.685328       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:03.685457       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:03.685474       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:03.685483       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:03.685861       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:03.686001       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:13.702673       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:13.702782       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:13.702801       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:13.703456       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:13.703827       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:13.703864       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:23.711167       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:23.711370       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:23.711388       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:23.711398       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:23.712127       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:23.712222       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:33.724041       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:33.724810       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:33.724973       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:33.725045       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:33.725458       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:33.725875       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:43.740216       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:43.740493       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:43.740511       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:43.740520       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:43.741453       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:43.741584       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:53.748632       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:53.749163       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:53.749285       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:53.749498       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:53.749815       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:53.749904       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:03.765208       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:03.765326       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:03.765343       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:03.765351       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:03.765883       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:03.766028       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:13.775221       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:13.775396       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:13.775430       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:13.775502       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:13.776058       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:13.776177       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:23.790073       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:23.790179       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:23.790195       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:23.790207       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:23.790761       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:23.790798       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:33.800116       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:33.800240       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:33.800256       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:33.800265       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:33.800837       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:33.800858       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:43.817961       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:43.818115       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:43.818132       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:43.818146       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:43.818537       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:43.818661       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:53.827340       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:53.827385       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:53.827398       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:53.827406       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:53.827787       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:53.827885       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:03.840761       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:03.840837       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:03.840851       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:03.840859       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:03.841285       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:03.841319       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:13.848127       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:13.848174       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:13.848188       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:13.848195       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:13.848630       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:13.848646       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:23.863745       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:23.863916       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:23.863950       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:23.863996       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:23.864419       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:23.864510       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:33.876214       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:33.876331       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:33.876347       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:33.876355       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:33.877021       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:33.877100       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:43.886399       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:43.886544       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:43.886626       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:43.886636       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:43.886872       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:43.886890       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:53.903761       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:53.903845       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:53.903871       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:53.903880       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:53.905033       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:53.905181       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.205415    5712 command_runner.go:130] ! I0318 12:38:03.919532       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205415    5712 command_runner.go:130] ! I0318 12:38:03.919783       1 main.go:227] handling current node
	I0318 12:44:39.205415    5712 command_runner.go:130] ! I0318 12:38:03.919840       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205415    5712 command_runner.go:130] ! I0318 12:38:03.919894       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205415    5712 command_runner.go:130] ! I0318 12:38:03.920221       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.205415    5712 command_runner.go:130] ! I0318 12:38:03.920390       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.205518    5712 command_runner.go:130] ! I0318 12:38:13.927894       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205518    5712 command_runner.go:130] ! I0318 12:38:13.928004       1 main.go:227] handling current node
	I0318 12:44:39.205518    5712 command_runner.go:130] ! I0318 12:38:13.928022       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205518    5712 command_runner.go:130] ! I0318 12:38:13.928031       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205518    5712 command_runner.go:130] ! I0318 12:38:13.928232       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.205601    5712 command_runner.go:130] ! I0318 12:38:13.928269       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.205601    5712 command_runner.go:130] ! I0318 12:38:23.943692       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205601    5712 command_runner.go:130] ! I0318 12:38:23.943780       1 main.go:227] handling current node
	I0318 12:44:39.205601    5712 command_runner.go:130] ! I0318 12:38:23.943795       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205601    5712 command_runner.go:130] ! I0318 12:38:23.943804       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205601    5712 command_runner.go:130] ! I0318 12:38:23.944523       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.205676    5712 command_runner.go:130] ! I0318 12:38:23.944596       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.205676    5712 command_runner.go:130] ! I0318 12:38:33.952000       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205676    5712 command_runner.go:130] ! I0318 12:38:33.952098       1 main.go:227] handling current node
	I0318 12:44:39.205676    5712 command_runner.go:130] ! I0318 12:38:33.952114       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205753    5712 command_runner.go:130] ! I0318 12:38:33.952123       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205753    5712 command_runner.go:130] ! I0318 12:38:33.952466       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.205753    5712 command_runner.go:130] ! I0318 12:38:33.952503       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.205753    5712 command_runner.go:130] ! I0318 12:38:43.965979       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205753    5712 command_runner.go:130] ! I0318 12:38:43.966101       1 main.go:227] handling current node
	I0318 12:44:39.205753    5712 command_runner.go:130] ! I0318 12:38:43.966117       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:43.966125       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.989210       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.989308       1 main.go:227] handling current node
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.989322       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.989373       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.989864       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.989957       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.990028       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.157.200 Flags: [] Table: 0} 
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:39:03.996429       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:39:03.996598       1 main.go:227] handling current node
	I0318 12:44:39.205977    5712 command_runner.go:130] ! I0318 12:39:03.996614       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205977    5712 command_runner.go:130] ! I0318 12:39:03.996623       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205977    5712 command_runner.go:130] ! I0318 12:39:03.996739       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.205977    5712 command_runner.go:130] ! I0318 12:39:03.996753       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.205977    5712 command_runner.go:130] ! I0318 12:39:14.008318       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206050    5712 command_runner.go:130] ! I0318 12:39:14.008384       1 main.go:227] handling current node
	I0318 12:44:39.206050    5712 command_runner.go:130] ! I0318 12:39:14.008398       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206050    5712 command_runner.go:130] ! I0318 12:39:14.008405       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206050    5712 command_runner.go:130] ! I0318 12:39:14.009080       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206115    5712 command_runner.go:130] ! I0318 12:39:14.009179       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206115    5712 command_runner.go:130] ! I0318 12:39:24.016154       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206171    5712 command_runner.go:130] ! I0318 12:39:24.016315       1 main.go:227] handling current node
	I0318 12:44:39.206171    5712 command_runner.go:130] ! I0318 12:39:24.016330       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206209    5712 command_runner.go:130] ! I0318 12:39:24.016338       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206209    5712 command_runner.go:130] ! I0318 12:39:24.016842       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206209    5712 command_runner.go:130] ! I0318 12:39:24.016875       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206209    5712 command_runner.go:130] ! I0318 12:39:34.029061       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206281    5712 command_runner.go:130] ! I0318 12:39:34.029159       1 main.go:227] handling current node
	I0318 12:44:39.206305    5712 command_runner.go:130] ! I0318 12:39:34.029175       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:34.029184       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:34.030103       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:34.030216       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:44.037921       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:44.037960       1 main.go:227] handling current node
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:44.037972       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:44.037981       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:44.038243       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:44.038318       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:54.057786       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:54.058021       1 main.go:227] handling current node
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:54.058100       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:54.058189       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:54.058376       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:54.058478       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:04.067119       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:04.067262       1 main.go:227] handling current node
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:04.067280       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:04.067289       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:04.067742       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:04.067846       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:14.082426       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:14.082921       1 main.go:227] handling current node
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:14.082946       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:14.082956       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:14.083174       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:14.083247       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:24.098060       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:24.098161       1 main.go:227] handling current node
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:24.098178       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:24.098187       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:24.098316       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:24.098324       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:34.335103       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:34.335169       1 main.go:227] handling current node
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:34.335185       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:34.335192       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:34.335470       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:34.335488       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:44.342962       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:44.343122       1 main.go:227] handling current node
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:44.343139       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:44.343148       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:44.343738       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:44.343780       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
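The kindnet lines above record a reconcile loop: every ~10s the daemon walks the node list, marks the current node as handled, and ensures a route to each peer node's pod CIDR via that node's IP (note the "Adding route {... Dst: 10.244.3.0/24 ... Gw: 172.25.157.200 ...}" line at 12:38:53, after multinode-642600-m03 came back with a new IP and CIDR). A minimal sketch of that pattern, not kindnet's actual code, assuming the github.com/vishvananda/netlink API and hardcoding the peer data that the real daemon reads from the Kubernetes node objects:

package main

import (
	"log"
	"net"
	"time"

	"github.com/vishvananda/netlink"
)

type peer struct {
	nodeIP  string // node InternalIP, used as the gateway
	podCIDR string // node's pod CIDR, used as the route destination
}

func main() {
	peers := []peer{
		{"172.25.159.102", "10.244.1.0/24"}, // multinode-642600-m02
		{"172.25.157.200", "10.244.3.0/24"}, // multinode-642600-m03
	}
	for range time.Tick(10 * time.Second) {
		for _, p := range peers {
			_, dst, err := net.ParseCIDR(p.podCIDR)
			if err != nil {
				log.Printf("bad CIDR %q: %v", p.podCIDR, err)
				continue
			}
			// RouteReplace is idempotent, so repeating the loop every
			// cycle only re-asserts the same kernel route.
			route := &netlink.Route{Dst: dst, Gw: net.ParseIP(p.nodeIP)}
			if err := netlink.RouteReplace(route); err != nil {
				log.Printf("route %s via %s: %v", p.podCIDR, p.nodeIP, err)
				continue
			}
			log.Printf("Node CIDR %s reachable via %s", p.podCIDR, p.nodeIP)
		}
	}
}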
	I0318 12:44:39.227790    5712 logs.go:123] Gathering logs for dmesg ...
	I0318 12:44:39.228795    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 12:44:39.253573    5712 command_runner.go:130] > [Mar18 12:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.129398] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.023142] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.067111] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.023049] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0318 12:44:39.253573    5712 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +5.633479] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.746575] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +1.948336] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +7.356358] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0318 12:44:39.253573    5712 command_runner.go:130] > [Mar18 12:42] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.196447] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [Mar18 12:43] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.116812] kauditd_printk_skb: 73 callbacks suppressed
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.565179] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.224131] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.243543] systemd-fstab-generator[1034]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +2.986318] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +0.197212] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +0.228503] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +0.297734] systemd-fstab-generator[1258]: Ignoring "noauto" option for root device
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +0.969011] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +0.114690] kauditd_printk_skb: 205 callbacks suppressed
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +3.575437] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +1.537938] kauditd_printk_skb: 44 callbacks suppressed
	I0318 12:44:39.254675    5712 command_runner.go:130] > [  +6.654182] kauditd_printk_skb: 30 callbacks suppressed
	I0318 12:44:39.254675    5712 command_runner.go:130] > [  +4.384606] systemd-fstab-generator[2563]: Ignoring "noauto" option for root device
	I0318 12:44:39.254675    5712 command_runner.go:130] > [  +7.200668] kauditd_printk_skb: 70 callbacks suppressed
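The dmesg step above runs `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400` on the guest (-P disables the pager, -H gives human-readable timestamps, -L=never strips color codes, --level keeps only warnings and worse). A minimal sketch, assuming plain os/exec in place of minikube's SSH-backed ssh_runner, of running the same pipeline and capturing its output:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same pipeline the log gatherer issues; tail keeps the last 400 lines.
	cmd := exec.Command("/bin/bash", "-c",
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("dmesg gather failed:", err)
	}
	fmt.Print(string(out))
}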
	I0318 12:44:39.256303    5712 logs.go:123] Gathering logs for kube-scheduler [bd1e4f4d262e] ...
	I0318 12:44:39.256303    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1e4f4d262e"
	I0318 12:44:39.296399    5712 command_runner.go:130] ! I0318 12:43:27.649061       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:39.296399    5712 command_runner.go:130] ! W0318 12:43:30.548831       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 12:44:39.296399    5712 command_runner.go:130] ! W0318 12:43:30.549092       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:39.296399    5712 command_runner.go:130] ! W0318 12:43:30.549282       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 12:44:39.296399    5712 command_runner.go:130] ! W0318 12:43:30.549461       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 12:44:39.296891    5712 command_runner.go:130] ! I0318 12:43:30.613305       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 12:44:39.296923    5712 command_runner.go:130] ! I0318 12:43:30.613417       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.296923    5712 command_runner.go:130] ! I0318 12:43:30.618512       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 12:44:39.296923    5712 command_runner.go:130] ! I0318 12:43:30.619171       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 12:44:39.296923    5712 command_runner.go:130] ! I0318 12:43:30.619276       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:39.296923    5712 command_runner.go:130] ! I0318 12:43:30.620071       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:39.296923    5712 command_runner.go:130] ! I0318 12:43:30.720411       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:39.299846    5712 logs.go:123] Gathering logs for kube-scheduler [47777d4c0b90] ...
	I0318 12:44:39.299902    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47777d4c0b90"
	I0318 12:44:39.335743    5712 command_runner.go:130] ! I0318 12:18:43.828879       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.562226       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.562618       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.562705       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.562793       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 12:44:39.335809    5712 command_runner.go:130] ! I0318 12:18:46.615857       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 12:44:39.335809    5712 command_runner.go:130] ! I0318 12:18:46.615957       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.335809    5712 command_runner.go:130] ! I0318 12:18:46.622177       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 12:44:39.335809    5712 command_runner.go:130] ! I0318 12:18:46.622201       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:39.335809    5712 command_runner.go:130] ! I0318 12:18:46.625084       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 12:44:39.335809    5712 command_runner.go:130] ! I0318 12:18:46.625162       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.631110       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:39.335809    5712 command_runner.go:130] ! E0318 12:18:46.631164       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.634891       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:39.335809    5712 command_runner.go:130] ! E0318 12:18:46.634917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.636313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:39.335809    5712 command_runner.go:130] ! E0318 12:18:46.638655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.636730       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336392    5712 command_runner.go:130] ! E0318 12:18:46.639099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336448    5712 command_runner.go:130] ! W0318 12:18:46.636905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336483    5712 command_runner.go:130] ! E0318 12:18:46.639254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336483    5712 command_runner.go:130] ! W0318 12:18:46.636986       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336483    5712 command_runner.go:130] ! E0318 12:18:46.639495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336587    5712 command_runner.go:130] ! W0318 12:18:46.641683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:39.336587    5712 command_runner.go:130] ! E0318 12:18:46.641953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:39.336587    5712 command_runner.go:130] ! W0318 12:18:46.642236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:39.336587    5712 command_runner.go:130] ! E0318 12:18:46.642375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.642673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.646073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.647270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.646147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.647534       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.646208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.647719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.646271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.647738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.646322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.647752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.647915       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.650301       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.650528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:47.471960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:47.472093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:47.540921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:47.541368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:39.337419    5712 command_runner.go:130] ! W0318 12:18:47.545171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:39.337468    5712 command_runner.go:130] ! E0318 12:18:47.546126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.563772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.563806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.597770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.597873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.684794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.685008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.685352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.685509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.840132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.840303       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.879838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.880363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.906171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.906493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:48.059997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:48.060049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:48.096160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:48.096304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.338031    5712 command_runner.go:130] ! W0318 12:18:48.096504       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:39.338080    5712 command_runner.go:130] ! E0318 12:18:48.096662       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:39.338080    5712 command_runner.go:130] ! W0318 12:18:48.133175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:39.338080    5712 command_runner.go:130] ! E0318 12:18:48.133469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:39.338080    5712 command_runner.go:130] ! W0318 12:18:48.135066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:39.338080    5712 command_runner.go:130] ! E0318 12:18:48.135196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:39.338080    5712 command_runner.go:130] ! I0318 12:18:50.022459       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:39.338080    5712 command_runner.go:130] ! E0318 12:40:51.995231       1 run.go:74] "command failed" err="finished without leader elect"
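Note on the kube-scheduler log above: the paired W/E reflector lines at 12:18:47-48 are the scheduler's informers being denied list/watch while the apiserver is still coming up; such "forbidden" errors are normally transient until RBAC authorization is fully served, and the "Caches are synced" line at 12:18:50 shows the informers recovered. The closing E0318 12:40:51 "finished without leader elect" is what kube-scheduler logs when its leader-election context ends, typically because the process is being shut down, which matches the node restart later in this run. As a sanity check once the cluster settles (the kubectl context name here is assumed from this run's profile), the scheduler's permissions can be probed with:

    kubectl --context multinode-642600 auth can-i list pods --as=system:kube-scheduler
    kubectl --context multinode-642600 auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler

Both should print "yes" once the system:kube-scheduler role bindings are in place.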
	I0318 12:44:39.350356    5712 logs.go:123] Gathering logs for kube-proxy [575b41a3a85a] ...
	I0318 12:44:39.350356    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 575b41a3a85a"
	I0318 12:44:39.385101    5712 command_runner.go:130] ! I0318 12:43:33.336778       1 server_others.go:69] "Using iptables proxy"
	I0318 12:44:39.385156    5712 command_runner.go:130] ! I0318 12:43:33.550433       1 node.go:141] Successfully retrieved node IP: 172.25.148.129
	I0318 12:44:39.385156    5712 command_runner.go:130] ! I0318 12:43:33.793084       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:44:39.385156    5712 command_runner.go:130] ! I0318 12:43:33.793109       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:44:39.385217    5712 command_runner.go:130] ! I0318 12:43:33.796954       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:44:39.385253    5712 command_runner.go:130] ! I0318 12:43:33.798936       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:44:39.385253    5712 command_runner.go:130] ! I0318 12:43:33.800347       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:44:39.385301    5712 command_runner.go:130] ! I0318 12:43:33.800569       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.385301    5712 command_runner.go:130] ! I0318 12:43:33.803648       1 config.go:188] "Starting service config controller"
	I0318 12:44:39.385301    5712 command_runner.go:130] ! I0318 12:43:33.805156       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:44:39.385301    5712 command_runner.go:130] ! I0318 12:43:33.805421       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:44:39.385372    5712 command_runner.go:130] ! I0318 12:43:33.805584       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:44:39.385372    5712 command_runner.go:130] ! I0318 12:43:33.808628       1 config.go:315] "Starting node config controller"
	I0318 12:44:39.385372    5712 command_runner.go:130] ! I0318 12:43:33.808736       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:44:39.385436    5712 command_runner.go:130] ! I0318 12:43:33.905580       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:44:39.385436    5712 command_runner.go:130] ! I0318 12:43:33.907041       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:44:39.385436    5712 command_runner.go:130] ! I0318 12:43:33.909416       1 shared_informer.go:318] Caches are synced for node config
	I0318 12:44:39.386346    5712 logs.go:123] Gathering logs for kube-proxy [4bbad08fe59a] ...
	I0318 12:44:39.386895    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbad08fe59a"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:04.970720       1 server_others.go:69] "Using iptables proxy"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:04.997380       1 node.go:141] Successfully retrieved node IP: 172.25.151.112
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.099028       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.099065       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.102885       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.103013       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.103652       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.103704       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.105505       1 config.go:188] "Starting service config controller"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.106093       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.106131       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.106138       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.107424       1 config.go:315] "Starting node config controller"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.107456       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.206699       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.206811       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.207857       1 shared_informer.go:318] Caches are synced for node config
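The two kube-proxy blocks above come from different containers on the same node: 4bbad08fe59a is the pre-restart instance (timestamps around 12:19) and 575b41a3a85a the post-restart one (around 12:43). Both reach "Caches are synced", and the "No iptables support for family" ipFamily="IPv6" line simply means the proxy falls back to single-stack IPv4, as the following line confirms. The collection step minikube performs here can be reproduced by hand (container IDs are specific to this run), e.g.:

    out/minikube-windows-amd64.exe -p multinode-642600 ssh -- docker logs --tail 400 575b41a3a85a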
	I0318 12:44:39.420657    5712 logs.go:123] Gathering logs for kube-controller-manager [14ae9398d33b] ...
	I0318 12:44:39.420709    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ae9398d33b"
	I0318 12:44:39.452685    5712 command_runner.go:130] ! I0318 12:43:27.406049       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:39.452685    5712 command_runner.go:130] ! I0318 12:43:29.733819       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:44:39.452685    5712 command_runner.go:130] ! I0318 12:43:29.734137       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.452685    5712 command_runner.go:130] ! I0318 12:43:29.737351       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:39.452809    5712 command_runner.go:130] ! I0318 12:43:29.737598       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:39.452809    5712 command_runner.go:130] ! I0318 12:43:29.739365       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:44:39.452809    5712 command_runner.go:130] ! I0318 12:43:29.740428       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:39.452809    5712 command_runner.go:130] ! I0318 12:43:32.581261       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 12:44:39.453395    5712 command_runner.go:130] ! I0318 12:43:32.597867       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 12:44:39.453897    5712 command_runner.go:130] ! I0318 12:43:32.602078       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 12:44:39.453952    5712 command_runner.go:130] ! I0318 12:43:32.602099       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 12:44:39.454168    5712 command_runner.go:130] ! I0318 12:43:32.605600       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 12:44:39.454220    5712 command_runner.go:130] ! I0318 12:43:32.605807       1 expand_controller.go:328] "Starting expand controller"
	I0318 12:44:39.454220    5712 command_runner.go:130] ! I0318 12:43:32.605957       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 12:44:39.454220    5712 command_runner.go:130] ! I0318 12:43:32.620725       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 12:44:39.454263    5712 command_runner.go:130] ! I0318 12:43:32.621286       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 12:44:39.454318    5712 command_runner.go:130] ! I0318 12:43:32.621374       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 12:44:39.454318    5712 command_runner.go:130] ! I0318 12:43:32.663010       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 12:44:39.454318    5712 command_runner.go:130] ! I0318 12:43:32.663383       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 12:44:39.454353    5712 command_runner.go:130] ! I0318 12:43:32.663451       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 12:44:39.454398    5712 command_runner.go:130] ! I0318 12:43:32.674431       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 12:44:39.454398    5712 command_runner.go:130] ! I0318 12:43:32.675030       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 12:44:39.454398    5712 command_runner.go:130] ! I0318 12:43:32.675045       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 12:44:39.454398    5712 command_runner.go:130] ! I0318 12:43:32.680220       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 12:44:39.454497    5712 command_runner.go:130] ! I0318 12:43:32.680236       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 12:44:39.454568    5712 command_runner.go:130] ! I0318 12:43:32.680266       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.454568    5712 command_runner.go:130] ! I0318 12:43:32.681919       1 shared_informer.go:318] Caches are synced for tokens
	I0318 12:44:39.454625    5712 command_runner.go:130] ! I0318 12:43:32.684132       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 12:44:39.454650    5712 command_runner.go:130] ! I0318 12:43:32.684147       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.684164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.685811       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.685845       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.686123       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.687526       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.687845       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.687858       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.687918       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.691958       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.692673       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.696192       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.696622       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.701031       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.701415       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.701449       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.701458       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 12:44:39.454679    5712 command_runner.go:130] ! E0318 12:43:32.705162       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.705349       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.705364       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.705376       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.750736       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.751361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! W0318 12:43:32.751515       1 shared_informer.go:593] resyncPeriod 19h34m1.540802039s is smaller than resyncCheckPeriod 20h12m46.622656472s and the informer has already started. Changing it to 20h12m46.622656472s
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.752012       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.752529       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 12:44:39.456045    5712 command_runner.go:130] ! I0318 12:43:32.752719       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 12:44:39.456127    5712 command_runner.go:130] ! I0318 12:43:32.752884       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 12:44:39.456361    5712 command_runner.go:130] ! I0318 12:43:32.753191       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 12:44:39.456623    5712 command_runner.go:130] ! I0318 12:43:32.753284       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 12:44:39.456684    5712 command_runner.go:130] ! I0318 12:43:32.753677       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 12:44:39.456684    5712 command_runner.go:130] ! I0318 12:43:32.753791       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.753884       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.754036       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.754202       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.754691       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.755001       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.755205       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.755784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.755974       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 12:44:39.456863    5712 command_runner.go:130] ! I0318 12:43:32.756144       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 12:44:39.456912    5712 command_runner.go:130] ! I0318 12:43:32.756649       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 12:44:39.456912    5712 command_runner.go:130] ! I0318 12:43:32.756826       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 12:44:39.456947    5712 command_runner.go:130] ! I0318 12:43:32.757119       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:39.456947    5712 command_runner.go:130] ! I0318 12:43:32.757364       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 12:44:39.456980    5712 command_runner.go:130] ! I0318 12:43:32.757580       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 12:44:39.457001    5712 command_runner.go:130] ! E0318 12:43:32.773718       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 12:44:39.457001    5712 command_runner.go:130] ! I0318 12:43:32.773746       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 12:44:39.457035    5712 command_runner.go:130] ! I0318 12:43:32.786590       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 12:44:39.457035    5712 command_runner.go:130] ! I0318 12:43:32.786978       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 12:44:39.457067    5712 command_runner.go:130] ! I0318 12:43:32.787007       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 12:44:39.457067    5712 command_runner.go:130] ! I0318 12:43:32.795770       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.798452       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.798585       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.801712       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.802261       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.806063       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.823560       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.823578       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.823595       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.823621       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.833033       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.833480       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.833494       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.862160       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.862209       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.862524       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 12:44:39.457703    5712 command_runner.go:130] ! I0318 12:43:32.862562       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 12:44:39.457703    5712 command_runner.go:130] ! I0318 12:43:32.862573       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 12:44:39.457703    5712 command_runner.go:130] ! I0318 12:43:32.883369       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 12:44:39.457703    5712 command_runner.go:130] ! I0318 12:43:32.886141       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 12:44:39.457764    5712 command_runner.go:130] ! I0318 12:43:32.886674       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 12:44:39.457764    5712 command_runner.go:130] ! I0318 12:43:32.896468       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 12:44:39.457764    5712 command_runner.go:130] ! I0318 12:43:32.896951       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 12:44:39.457764    5712 command_runner.go:130] ! I0318 12:43:32.897135       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.900325       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.900580       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.903531       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.917793       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.918152       1 horizontal.go:200] "Starting HPA controller"
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.918638       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.920489       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.920802       1 gc_controller.go:101] "Starting GC controller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.922940       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.923834       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.924143       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.924461       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.935394       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.935610       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.935623       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.996434       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.996586       1 job_controller.go:226] "Starting job controller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.996666       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 12:44:39.458093    5712 command_runner.go:130] ! I0318 12:43:33.085354       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 12:44:39.458093    5712 command_runner.go:130] ! I0318 12:43:33.086157       1 disruption.go:433] "Sending events to api server."
	I0318 12:44:39.458093    5712 command_runner.go:130] ! I0318 12:43:33.086235       1 disruption.go:444] "Starting disruption controller"
	I0318 12:44:39.458093    5712 command_runner.go:130] ! I0318 12:43:33.086245       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 12:44:39.458093    5712 command_runner.go:130] ! I0318 12:43:33.141477       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 12:44:39.458162    5712 command_runner.go:130] ! I0318 12:43:33.142359       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 12:44:39.458162    5712 command_runner.go:130] ! I0318 12:43:33.142566       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 12:44:39.458229    5712 command_runner.go:130] ! I0318 12:43:33.186973       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 12:44:39.458229    5712 command_runner.go:130] ! I0318 12:43:33.187335       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 12:44:39.458255    5712 command_runner.go:130] ! I0318 12:43:33.187410       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 12:44:39.458255    5712 command_runner.go:130] ! I0318 12:43:33.236517       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 12:44:39.458301    5712 command_runner.go:130] ! I0318 12:43:33.236982       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 12:44:39.458301    5712 command_runner.go:130] ! I0318 12:43:33.237471       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 12:44:39.458329    5712 command_runner.go:130] ! I0318 12:43:33.286539       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 12:44:39.458329    5712 command_runner.go:130] ! I0318 12:43:33.287154       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 12:44:39.458389    5712 command_runner.go:130] ! I0318 12:43:33.287375       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 12:44:39.458413    5712 command_runner.go:130] ! I0318 12:43:43.355688       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.355845       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.356879       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.357033       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.359716       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.361043       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.361062       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.364706       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.364861       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.364989       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.369492       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.369675       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.369706       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.375944       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.376145       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.377600       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.390058       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.405940       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600\" does not exist"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.408115       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.408433       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.408623       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m02\" does not exist"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.408708       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.408817       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.421506       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.446678       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.459596       1 shared_informer.go:318] Caches are synced for node
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.459833       1 range_allocator.go:174] "Sending events to api server"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.460258       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.460829       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.461091       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.461418       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.463618       1 shared_informer.go:318] Caches are synced for namespace
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.466097       1 shared_informer.go:318] Caches are synced for taint
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.466427       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.466639       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.466863       1 taint_manager.go:210] "Sending events to api server"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.468821       1 event.go:307] "Event occurred" object="multinode-642600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600 event: Registered Node multinode-642600 in Controller"
	I0318 12:44:39.458981    5712 command_runner.go:130] ! I0318 12:43:43.469328       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller"
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.469579       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.469959       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.477268       1 shared_informer.go:318] Caches are synced for deployment
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.486297       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.487082       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.487171       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.487768       1 shared_informer.go:318] Caches are synced for TTL
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.487848       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.489265       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.497682       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.498610       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 12:44:39.459190    5712 command_runner.go:130] ! I0318 12:43:43.498725       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 12:44:39.459190    5712 command_runner.go:130] ! I0318 12:43:43.501123       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600"
	I0318 12:44:39.459190    5712 command_runner.go:130] ! I0318 12:43:43.503362       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 12:44:39.459190    5712 command_runner.go:130] ! I0318 12:43:43.505991       1 shared_informer.go:318] Caches are synced for expand
	I0318 12:44:39.459241    5712 command_runner.go:130] ! I0318 12:43:43.503938       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 12:44:39.459241    5712 command_runner.go:130] ! I0318 12:43:43.506104       1 shared_informer.go:318] Caches are synced for service account
	I0318 12:44:39.459241    5712 command_runner.go:130] ! I0318 12:43:43.505782       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m02"
	I0318 12:44:39.459241    5712 command_runner.go:130] ! I0318 12:43:43.505818       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m03"
	I0318 12:44:39.459293    5712 command_runner.go:130] ! I0318 12:43:43.506356       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 12:44:39.459293    5712 command_runner.go:130] ! I0318 12:43:43.521010       1 shared_informer.go:318] Caches are synced for HPA
	I0318 12:44:39.459293    5712 command_runner.go:130] ! I0318 12:43:43.524230       1 shared_informer.go:318] Caches are synced for GC
	I0318 12:44:39.459334    5712 command_runner.go:130] ! I0318 12:43:43.527081       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 12:44:39.459334    5712 command_runner.go:130] ! I0318 12:43:43.534422       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 12:44:39.459334    5712 command_runner.go:130] ! I0318 12:43:43.537721       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 12:44:39.459334    5712 command_runner.go:130] ! I0318 12:43:43.545260       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 12:44:39.459382    5712 command_runner.go:130] ! I0318 12:43:43.546769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.454588ms"
	I0318 12:44:39.459382    5712 command_runner.go:130] ! I0318 12:43:43.547853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.476888ms"
	I0318 12:44:39.459382    5712 command_runner.go:130] ! I0318 12:43:43.552128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66µs"
	I0318 12:44:39.459382    5712 command_runner.go:130] ! I0318 12:43:43.552429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.199µs"
	I0318 12:44:39.459442    5712 command_runner.go:130] ! I0318 12:43:43.565701       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 12:44:39.459442    5712 command_runner.go:130] ! I0318 12:43:43.580927       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 12:44:39.459442    5712 command_runner.go:130] ! I0318 12:43:43.585098       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 12:44:39.459481    5712 command_runner.go:130] ! I0318 12:43:43.586663       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:39.459481    5712 command_runner.go:130] ! I0318 12:43:43.590461       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:39.459933    5712 command_runner.go:130] ! I0318 12:43:43.597830       1 shared_informer.go:318] Caches are synced for job
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:43:43.635734       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:43:43.658493       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:43:43.686534       1 shared_informer.go:318] Caches are synced for disruption
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:43:44.024395       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:43:44.024760       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:43:44.048280       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:11.303411       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:13.533509       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-48qkw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-48qkw"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:13.534203       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-fgn7v" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-fgn7v"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:13.534478       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:23.562573       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m02 status is now: NodeNotReady"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:23.591486       1 event.go:307] "Event occurred" object="kube-system/kindnet-d5llj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:23.614671       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vts9f" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:23.639496       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-hmhdf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:23.661949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.740356ms"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:23.663289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="50.499µs"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:37.149797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.1µs"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:37.209300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="28.125704ms"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:37.209415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.4µs"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:37.245284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.227968ms"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:37.254358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="3.872028ms"
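Timeline for the kube-controller-manager block above: the post-restart instance (14ae9398d33b) generates its in-memory serving cert at 12:43:27, starts its controllers, registers all three nodes at 12:43:43 ("Missing timestamp for Node. Assuming now as a timestamp"), and then marks multinode-642600-m02 NotReady at 12:44:23 — exactly 40s later, consistent with the default node-monitor-grace-period of 40s (a hedged reading; the flag value is not shown in this log). The resulting events can be listed with:

    kubectl --context multinode-642600 get events -A --field-selector reason=NodeNotReady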
	I0318 12:44:39.474545    5712 logs.go:123] Gathering logs for kindnet [9fec05a61d2a] ...
	I0318 12:44:39.474545    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fec05a61d2a"
	I0318 12:44:39.508838    5712 command_runner.go:130] ! I0318 12:43:33.429181       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 12:44:39.508838    5712 command_runner.go:130] ! I0318 12:43:33.431032       1 main.go:107] hostIP = 172.25.148.129
	I0318 12:44:39.508838    5712 command_runner.go:130] ! podIP = 172.25.148.129
	I0318 12:44:39.508838    5712 command_runner.go:130] ! I0318 12:43:33.432708       1 main.go:116] setting mtu 1500 for CNI 
	I0318 12:44:39.509813    5712 command_runner.go:130] ! I0318 12:43:33.432750       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 12:44:39.509813    5712 command_runner.go:130] ! I0318 12:43:33.432773       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 12:44:39.509887    5712 command_runner.go:130] ! I0318 12:44:03.855331       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0318 12:44:39.509887    5712 command_runner.go:130] ! I0318 12:44:03.906638       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:39.509927    5712 command_runner.go:130] ! I0318 12:44:03.906763       1 main.go:227] handling current node
	I0318 12:44:39.509927    5712 command_runner.go:130] ! I0318 12:44:03.907280       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.509986    5712 command_runner.go:130] ! I0318 12:44:03.907371       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.509986    5712 command_runner.go:130] ! I0318 12:44:03.907763       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.25.159.102 Flags: [] Table: 0} 
	I0318 12:44:39.510024    5712 command_runner.go:130] ! I0318 12:44:03.907983       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.510024    5712 command_runner.go:130] ! I0318 12:44:03.907999       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.510024    5712 command_runner.go:130] ! I0318 12:44:03.908063       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.157.200 Flags: [] Table: 0} 
	I0318 12:44:39.510085    5712 command_runner.go:130] ! I0318 12:44:13.926166       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:13.926260       1 main.go:227] handling current node
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:13.926281       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:13.926377       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:13.927231       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:13.927364       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:23.943396       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:23.943437       1 main.go:227] handling current node
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:23.943450       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:23.943456       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:23.943816       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:23.943956       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:33.951114       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:33.951215       1 main.go:227] handling current node
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:33.951232       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:33.951241       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:33.951807       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:33.951927       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
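In the kindnet block above, the first node list at 12:44:03 fails with "dial tcp 10.96.0.1:443: i/o timeout" (10.96.0.1 being the in-cluster kubernetes Service VIP), after which the retry succeeds and kindnetd programs a route to each remote node's pod CIDR, re-listing nodes roughly every 10 seconds. Whether those routes actually landed can be checked on the node itself, e.g.:

    out/minikube-windows-amd64.exe -p multinode-642600 ssh -- ip route show
    # expect, per the "Adding route" lines above: 10.244.1.0/24 via 172.25.159.102 and 10.244.3.0/24 via 172.25.157.200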
	I0318 12:44:39.513198    5712 logs.go:123] Gathering logs for Docker ...
	I0318 12:44:39.513198    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 12:44:39.554235    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:39.554235    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:39.554314    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:39.554350    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:39.554350    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:39.554350    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:39.554527    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:39.554527    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554527    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0318 12:44:39.554527    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554527    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0318 12:44:39.554729    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554729    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0318 12:44:39.554729    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:39.554729    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554729    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 systemd[1]: Starting Docker Application Container Engine...
	I0318 12:44:39.554823    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.799415676Z" level=info msg="Starting up"
	I0318 12:44:39.554823    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.800442474Z" level=info msg="containerd not running, starting managed containerd"
	I0318 12:44:39.554823    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.801655972Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	I0318 12:44:39.554895    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.836542309Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 12:44:39.554924    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.866837154Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 12:44:39.554924    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.866991653Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 12:44:39.554924    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.867166153Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 12:44:39.554989    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.867346253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555015    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868353051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.555039    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868455451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555097    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868755450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.555097    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868785850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555148    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868803850Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 12:44:39.555186    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868815950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555186    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.869407649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555186    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.870171948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555265    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873462742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.555298    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873569242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555298    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873718241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.555298    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873818241Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 12:44:39.555382    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874315040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 12:44:39.555382    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874434440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 12:44:39.555452    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874453940Z" level=info msg="metadata content store policy set" policy=shared
	I0318 12:44:39.555452    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880096930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 12:44:39.555502    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880252829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880377329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880397729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880414329Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880488329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880819128Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880926428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881236528Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881376427Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881400527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881426127Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881441527Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881474927Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881491327Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881506427Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881521027Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881536227Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881566927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881586627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881601327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881617327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881631227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881646527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881659427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881673727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881757827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881783527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556075    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881798027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556075    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881812927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556075    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881826827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556075    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881844827Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 12:44:39.556075    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881868126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556075    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881889326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556186    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881902926Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 12:44:39.556224    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882002626Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 12:44:39.556224    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882117726Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 12:44:39.556224    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882162226Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 12:44:39.556300    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882178726Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 12:44:39.556336    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882242626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882337926Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882358926Z" level=info msg="NRI interface is disabled by configuration."
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882603625Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882759725Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.883033524Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.883153424Z" level=info msg="containerd successfully booted in 0.049971s"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:47 multinode-642600 dockerd[652]: time="2024-03-18T12:42:47.858472851Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.057442718Z" level=info msg="Loading containers: start."
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.544395210Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.632442528Z" level=info msg="Loading containers: done."
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.662805631Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.663682128Z" level=info msg="Daemon has completed initialization"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.725498031Z" level=info msg="API listen on [::]:2376"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 systemd[1]: Started Docker Application Container Engine.
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.725911430Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 systemd[1]: Stopping Docker Application Container Engine...
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.631434936Z" level=info msg="Processing signal 'terminated'"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.633587433Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634258932Z" level=info msg="Daemon shutdown complete"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634450831Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634476831Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: docker.service: Deactivated successfully.
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: Stopped Docker Application Container Engine.
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: Starting Docker Application Container Engine...
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.717087499Z" level=info msg="Starting up"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.718262797Z" level=info msg="containerd not running, starting managed containerd"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.719705495Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1048
	I0318 12:44:39.556954    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.754738639Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 12:44:39.556954    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784193992Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 12:44:39.556954    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784236292Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 12:44:39.556954    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784275292Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 12:44:39.556954    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784291492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.556954    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784317492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.557080    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784331992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.557080    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784550091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.557176    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784651691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.557176    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784673391Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 12:44:39.557176    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784704091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.557176    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784764391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.557176    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784996290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.557267    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787641686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.557267    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787744286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.557267    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787950186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.557267    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788044886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 12:44:39.557370    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788091986Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 12:44:39.557370    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788127185Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 12:44:39.557370    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788138585Z" level=info msg="metadata content store policy set" policy=shared
	I0318 12:44:39.557370    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789136284Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 12:44:39.557472    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789269784Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 12:44:39.557472    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789298984Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 12:44:39.557472    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789320484Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 12:44:39.557472    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789342084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 12:44:39.557559    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789644383Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 12:44:39.557559    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.790600382Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 12:44:39.557559    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791760980Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 12:44:39.557559    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791832280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 12:44:39.557645    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791851580Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 12:44:39.557645    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791866579Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557645    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791880279Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557645    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791969479Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557728    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791989879Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557728    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792004479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557728    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792018079Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557728    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792030379Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557829    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792042479Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557829    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792063279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557829    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792077879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557829    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792090579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557908    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792103979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557908    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792117779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557908    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792135679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557908    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792148379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557994    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792161279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557994    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792174179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557994    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792188479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557994    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792199579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.558074    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792211479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.558074    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792223379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.558074    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792238079Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 12:44:39.558074    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792261579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.558074    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792276079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.558165    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792287879Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 12:44:39.558165    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792337479Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 12:44:39.558165    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792356479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 12:44:39.558252    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792368079Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 12:44:39.558252    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792380379Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 12:44:39.558252    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792530178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.558338    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792576778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 12:44:39.558338    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792591078Z" level=info msg="NRI interface is disabled by configuration."
	I0318 12:44:39.558338    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792811378Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 12:44:39.558457    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792927678Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.793108678Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.793160477Z" level=info msg="containerd successfully booted in 0.039931s"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:17 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:17.767243919Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:17 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:17.800090666Z" level=info msg="Loading containers: start."
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.103803081Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.187726546Z" level=info msg="Loading containers: done."
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.216487100Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.216648600Z" level=info msg="Daemon has completed initialization"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.271691012Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.271966711Z" level=info msg="API listen on [::]:2376"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 systemd[1]: Started Docker Application Container Engine.
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Loaded network plugin cni"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0318 12:44:39.559108    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker Info: &{ID:aa9100d3-1595-41ce-b36f-06932aef3ecb Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:53 SystemTime:2024-03-18T12:43:19.415553382Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002da070 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-642600 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0318 12:44:39.559108    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0318 12:44:39.559108    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0318 12:44:39.559217    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0318 12:44:39.559246    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Start cri-dockerd grpc backend"
	I0318 12:44:39.559246    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.559246    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-fgn7v_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ed38da653fbefea9aeb0ebdb91f985394a7a792571704a4875018f5a6bc9abda\""
	I0318 12:44:39.559246    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-48qkw_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e\""
	I0318 12:44:39.559246    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.316277241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.559373    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.317878239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.559373    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.318571937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559373    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.319101537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559373    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356638277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.559373    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356750476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.559488    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356767376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559488    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.357118676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559488    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.418245378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.559565    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.421018274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.559565    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.421217073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559565    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.422102972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559647    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428274662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.559647    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428365762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.559647    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428455862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559647    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428580261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559773    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67004ee038ee4247f6f751987304426067a63cee8c1636408dd16efea728ba78/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.559773    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f62197122538f83943df8b19710794ea6ea9a9ffa884082a1a62435e9b152c3f/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.559773    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eca6768355c74817c50b811b96b5fcc93a181c4968c53d4d4b0d0252ff6dbd0a/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.559859    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7281d6e698ea2dc42d7d3093ccde32b770bf8367fdb58230694380f40daeb9f/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.559859    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879224940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.559940    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879310840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.559940    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879325040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879857239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.050226267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.051715465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.056267457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.056729856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.064877643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065332743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065495042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065849742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091573301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091639201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091652401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091761800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:30Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.923135971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924017669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924165569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924385369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955673419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955753819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955772119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.956168818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964148405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560617    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964256705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560617    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964669604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560617    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964999404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560713    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a2f0ccaf5c4c6c0019124eda20c358dfa8aa20f0c92ade10aa3de32608e3527/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.560713    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/889c16eb0ab731956d02a28d0337dc6ff349dc574ba10d4fc1a939fb2e09d6d3/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.560790    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391303322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560790    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391389722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560790    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391408822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560855    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391535621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413113087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413460286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413726486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.414492285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ecbdcbdad3fa79af8ef70896ae67d65b14c47b5811078c5d6d167e0f295d1bc/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850170088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850431387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850449987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850590387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011137468Z" level=info msg="shim disconnected" id=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 namespace=moby
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011334567Z" level=warning msg="cleaning up after shim disconnected" id=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 namespace=moby
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011364567Z" level=info msg="cleaning up dead shim" namespace=moby
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1042]: time="2024-03-18T12:44:03.012148165Z" level=info msg="ignoring event" container=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562340104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562524303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562584503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561449    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.563253802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561449    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.376262769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.561449    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.376780468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.561449    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.377021468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561449    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.377223268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561553    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:44:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1090dd57409807a15613607fd810b67863a9dd9c5a8512d7a6720906641c7f26/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.561553    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684170919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.561553    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684458920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.561618    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684558520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561644    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.685142822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561674    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901354745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.561714    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901518146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.561714    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901538746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561714    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901651446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561839    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:44:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e1b2432b0ed66a1175586c13232eb9b9239f18a4f9a86e2a0c5f48c1407fdb14/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0318 12:44:39.561839    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.227440411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.561839    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.227939926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.561839    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.228081131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561981    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.228507343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561981    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.561981    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.562089    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.562089    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.562089    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.562162    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.562162    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.597174    5712 logs.go:123] Gathering logs for describe nodes ...
	I0318 12:44:39.597174    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 12:44:39.848049    5712 command_runner.go:130] > Name:               multinode-642600
	I0318 12:44:39.848049    5712 command_runner.go:130] > Roles:              control-plane
	I0318 12:44:39.848049    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_18_52_0700
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0318 12:44:39.848049    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:39.848049    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:18:46 +0000
	I0318 12:44:39.848049    5712 command_runner.go:130] > Taints:             <none>
	I0318 12:44:39.848049    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:39.848049    5712 command_runner.go:130] > Lease:
	I0318 12:44:39.848049    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600
	I0318 12:44:39.848049    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:39.848049    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:44:31 +0000
	I0318 12:44:39.848049    5712 command_runner.go:130] > Conditions:
	I0318 12:44:39.848049    5712 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0318 12:44:39.849048    5712 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0318 12:44:39.849048    5712 command_runner.go:130] >   MemoryPressure   False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0318 12:44:39.849048    5712 command_runner.go:130] >   DiskPressure     False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0318 12:44:39.849048    5712 command_runner.go:130] >   PIDPressure      False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Ready            True    Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:44:11 +0000   KubeletReady                 kubelet is posting ready status
	I0318 12:44:39.849048    5712 command_runner.go:130] > Addresses:
	I0318 12:44:39.849048    5712 command_runner.go:130] >   InternalIP:  172.25.148.129
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Hostname:    multinode-642600
	I0318 12:44:39.849048    5712 command_runner.go:130] > Capacity:
	I0318 12:44:39.849048    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:39.849048    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:39.849048    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:39.849048    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:39.849048    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:39.849048    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:39.849048    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:39.849048    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:39.849048    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:39.849048    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:39.849048    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:39.849048    5712 command_runner.go:130] > System Info:
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Machine ID:                 021cb44913fc4689ab25739f723ae3da
	I0318 12:44:39.849048    5712 command_runner.go:130] >   System UUID:                8a1bcbab-f132-7f42-b33a-a7db97e0afe6
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Boot ID:                    f11360a5-920e-4374-9d22-d06f111079d8
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:39.849048    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:39.849048    5712 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0318 12:44:39.849048    5712 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0318 12:44:39.849048    5712 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:39.849048    5712 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0318 12:44:39.849048    5712 command_runner.go:130] >   default                     busybox-5b5d89c9d6-48qkw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-fgn7v                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 etcd-multinode-642600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 kindnet-kpt4f                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-642600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-642600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 kube-proxy-4dg79                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-642600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:39.849048    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:39.849048    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Resource           Requests     Limits
	I0318 12:44:39.849048    5712 command_runner.go:130] >   --------           --------     ------
	I0318 12:44:39.849048    5712 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0318 12:44:39.849048    5712 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0318 12:44:39.849851    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0318 12:44:39.849851    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0318 12:44:39.849851    5712 command_runner.go:130] > Events:
	I0318 12:44:39.849851    5712 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 12:44:39.849851    5712 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 12:44:39.849851    5712 command_runner.go:130] >   Normal  Starting                 25m                kube-proxy       
	I0318 12:44:39.849851    5712 command_runner.go:130] >   Normal  Starting                 65s                kube-proxy       
	I0318 12:44:39.849851    5712 command_runner.go:130] >   Normal  Starting                 25m                kubelet          Starting kubelet.
	I0318 12:44:39.849851    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:39.849997    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:39.849997    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     25m (x7 over 25m)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:39.850029    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:39.850029    5712 command_runner.go:130] >   Normal  Starting                 25m                kubelet          Starting kubelet.
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     25m                kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    25m                kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  25m                kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  RegisteredNode           25m                node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeReady                25m                kubelet          Node multinode-642600 status is now: NodeReady
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    75s (x8 over 75s)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     75s (x7 over 75s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	I0318 12:44:39.850060    5712 command_runner.go:130] > Name:               multinode-642600-m02
	I0318 12:44:39.850060    5712 command_runner.go:130] > Roles:              <none>
	I0318 12:44:39.850060    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600-m02
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_22_13_0700
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:39.850060    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:39.850060    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:22:12 +0000
	I0318 12:44:39.850060    5712 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 12:44:39.850060    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:39.850060    5712 command_runner.go:130] > Lease:
	I0318 12:44:39.850060    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600-m02
	I0318 12:44:39.850060    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:39.850060    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:40:15 +0000
	I0318 12:44:39.850060    5712 command_runner.go:130] > Conditions:
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 12:44:39.850060    5712 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 12:44:39.850060    5712 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.850060    5712 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.850603    5712 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.850603    5712 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.850603    5712 command_runner.go:130] > Addresses:
	I0318 12:44:39.850603    5712 command_runner.go:130] >   InternalIP:  172.25.159.102
	I0318 12:44:39.850603    5712 command_runner.go:130] >   Hostname:    multinode-642600-m02
	I0318 12:44:39.850603    5712 command_runner.go:130] > Capacity:
	I0318 12:44:39.850603    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:39.850603    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:39.850603    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:39.850739    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:39.850739    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:39.850739    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:39.850775    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:39.850775    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:39.850775    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:39.850804    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:39.850804    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:39.850804    5712 command_runner.go:130] > System Info:
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Machine ID:                 3840c114554e41ff9ded1410244d8aba
	I0318 12:44:39.850804    5712 command_runner.go:130] >   System UUID:                23dbf5b1-f940-4749-8caf-1ae12d869a30
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Boot ID:                    9a3fcab5-beb6-4505-b112-82809850bba3
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:39.850804    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:39.850804    5712 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0318 12:44:39.850804    5712 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0318 12:44:39.850804    5712 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:39.850804    5712 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0318 12:44:39.850804    5712 command_runner.go:130] >   default                     busybox-5b5d89c9d6-hmhdf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0318 12:44:39.850804    5712 command_runner.go:130] >   kube-system                 kindnet-d5llj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0318 12:44:39.850804    5712 command_runner.go:130] >   kube-system                 kube-proxy-vts9f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0318 12:44:39.850804    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:39.850804    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Resource           Requests   Limits
	I0318 12:44:39.850804    5712 command_runner.go:130] >   --------           --------   ------
	I0318 12:44:39.850804    5712 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 12:44:39.850804    5712 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 12:44:39.850804    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 12:44:39.850804    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 12:44:39.850804    5712 command_runner.go:130] > Events:
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 12:44:39.850804    5712 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientMemory
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasNoDiskPressure
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientPID
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-642600-m02 status is now: NodeReady
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-642600-m02 status is now: NodeNotReady
	I0318 12:44:39.850804    5712 command_runner.go:130] > Name:               multinode-642600-m03
	I0318 12:44:39.850804    5712 command_runner.go:130] > Roles:              <none>
	I0318 12:44:39.850804    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600-m03
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_38_47_0700
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:39.850804    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:39.851382    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:38:46 +0000
	I0318 12:44:39.851382    5712 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 12:44:39.851382    5712 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 12:44:39.851382    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:39.851382    5712 command_runner.go:130] > Lease:
	I0318 12:44:39.851382    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600-m03
	I0318 12:44:39.851382    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:39.851382    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:39:48 +0000
	I0318 12:44:39.851382    5712 command_runner.go:130] > Conditions:
	I0318 12:44:39.851382    5712 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 12:44:39.851382    5712 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 12:44:39.851382    5712 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.851382    5712 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.851382    5712 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.851557    5712 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.851557    5712 command_runner.go:130] > Addresses:
	I0318 12:44:39.851557    5712 command_runner.go:130] >   InternalIP:  172.25.157.200
	I0318 12:44:39.851557    5712 command_runner.go:130] >   Hostname:    multinode-642600-m03
	I0318 12:44:39.851557    5712 command_runner.go:130] > Capacity:
	I0318 12:44:39.851557    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:39.851557    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:39.851557    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:39.851557    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:39.851642    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:39.851642    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:39.851642    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:39.851642    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:39.851642    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:39.851642    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:39.851642    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:39.851642    5712 command_runner.go:130] > System Info:
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Machine ID:                 b858c7f1c1bc42a69e1927ccc26ea5ce
	I0318 12:44:39.851722    5712 command_runner.go:130] >   System UUID:                8c4fd36f-ab8b-5447-9df2-542afafc5ab4
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Boot ID:                    cea0ecfe-24ab-4614-a808-1e2a7a960f26
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:39.851722    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:39.851839    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:39.851839    5712 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0318 12:44:39.851839    5712 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0318 12:44:39.851839    5712 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0318 12:44:39.851839    5712 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:39.851839    5712 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0318 12:44:39.851839    5712 command_runner.go:130] >   kube-system                 kindnet-thkjp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0318 12:44:39.851934    5712 command_runner.go:130] >   kube-system                 kube-proxy-khbjt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0318 12:44:39.851934    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:39.851934    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:39.851934    5712 command_runner.go:130] >   Resource           Requests   Limits
	I0318 12:44:39.851934    5712 command_runner.go:130] >   --------           --------   ------
	I0318 12:44:39.851934    5712 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 12:44:39.852011    5712 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 12:44:39.852011    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 12:44:39.852011    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 12:44:39.852011    5712 command_runner.go:130] > Events:
	I0318 12:44:39.852011    5712 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0318 12:44:39.852108    5712 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0318 12:44:39.852108    5712 command_runner.go:130] >   Normal  Starting                 17m                    kube-proxy       
	I0318 12:44:39.852108    5712 command_runner.go:130] >   Normal  Starting                 5m50s                  kube-proxy       
	I0318 12:44:39.852108    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x5 over 17m)      kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	I0318 12:44:39.852187    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x5 over 17m)      kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	I0318 12:44:39.852187    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x5 over 17m)      kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	I0318 12:44:39.852187    5712 command_runner.go:130] >   Normal  NodeReady                17m                    kubelet          Node multinode-642600-m03 status is now: NodeReady
	I0318 12:44:39.852187    5712 command_runner.go:130] >   Normal  Starting                 5m53s                  kubelet          Starting kubelet.
	I0318 12:44:39.852187    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m53s (x2 over 5m53s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	I0318 12:44:39.852187    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m53s (x2 over 5m53s)  kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	I0318 12:44:39.852274    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m53s (x2 over 5m53s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	I0318 12:44:39.852274    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m53s                  kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:39.852274    5712 command_runner.go:130] >   Normal  RegisteredNode           5m52s                  node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
	I0318 12:44:39.852274    5712 command_runner.go:130] >   Normal  NodeReady                5m47s                  kubelet          Node multinode-642600-m03 status is now: NodeReady
	I0318 12:44:39.852274    5712 command_runner.go:130] >   Normal  NodeNotReady             4m6s                   node-controller  Node multinode-642600-m03 status is now: NodeNotReady
	I0318 12:44:39.852360    5712 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
	I0318 12:44:39.863571    5712 logs.go:123] Gathering logs for kube-apiserver [a48a6d310b86] ...
	I0318 12:44:39.863571    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a48a6d310b86"
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:26.873064       1 options.go:220] external host was not specified, using 172.25.148.129
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:26.879001       1 server.go:148] Version: v1.28.4
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:26.879883       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:27.623853       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:27.658081       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:27.658128       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:27.660963       1 instance.go:298] Using reconciler: lease
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:27.814829       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:27.815233       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:28.557814       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:28.558168       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.283146       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.346403       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.360856       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.360910       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.361419       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.361431       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.362356       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.365115       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.365134       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.365140       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.370774       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.370809       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.375063       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.375102       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.375108       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.375862       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.375929       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.375979       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.376693       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.384185       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.384228       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897352    5712 command_runner.go:130] ! W0318 12:43:29.384236       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897352    5712 command_runner.go:130] ! I0318 12:43:29.385110       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0318 12:44:39.897352    5712 command_runner.go:130] ! W0318 12:43:29.385148       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897352    5712 command_runner.go:130] ! W0318 12:43:29.385155       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897352    5712 command_runner.go:130] ! I0318 12:43:29.388232       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0318 12:44:39.897352    5712 command_runner.go:130] ! W0318 12:43:29.388272       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0318 12:44:39.897352    5712 command_runner.go:130] ! I0318 12:43:29.392835       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0318 12:44:39.897352    5712 command_runner.go:130] ! W0318 12:43:29.392872       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897352    5712 command_runner.go:130] ! W0318 12:43:29.392880       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897567    5712 command_runner.go:130] ! I0318 12:43:29.393504       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0318 12:44:39.897567    5712 command_runner.go:130] ! W0318 12:43:29.393628       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897567    5712 command_runner.go:130] ! W0318 12:43:29.393636       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897567    5712 command_runner.go:130] ! I0318 12:43:29.401801       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0318 12:44:39.897567    5712 command_runner.go:130] ! W0318 12:43:29.401838       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897567    5712 command_runner.go:130] ! W0318 12:43:29.401846       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! I0318 12:43:29.405508       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0318 12:44:39.897688    5712 command_runner.go:130] ! I0318 12:43:29.409452       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.409492       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.409500       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! I0318 12:43:29.421682       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.421819       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.421829       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! I0318 12:43:29.426368       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.426405       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.426413       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! I0318 12:43:29.427337       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.427376       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! I0318 12:43:29.459555       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0318 12:44:39.897846    5712 command_runner.go:130] ! W0318 12:43:29.459595       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897872    5712 command_runner.go:130] ! I0318 12:43:30.367734       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:39.898207    5712 command_runner.go:130] ! I0318 12:43:30.367932       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:39.898207    5712 command_runner.go:130] ! I0318 12:43:30.368782       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0318 12:44:39.898271    5712 command_runner.go:130] ! I0318 12:43:30.370542       1 secure_serving.go:213] Serving securely on [::]:8443
	I0318 12:44:39.898271    5712 command_runner.go:130] ! I0318 12:43:30.370628       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:39.898316    5712 command_runner.go:130] ! I0318 12:43:30.371667       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.372321       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.372682       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.373559       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.373947       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.374159       1 available_controller.go:423] Starting AvailableConditionController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.374194       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.374404       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.374979       1 aggregator.go:164] waiting for initial CRD sync...
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.375087       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.375452       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.376837       1 controller.go:116] Starting legacy_token_tracking_controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.377105       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.377485       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.378013       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.378732       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.379224       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.379834       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.380470       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.380848       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.382047       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.382230       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.383964       1 controller.go:134] Starting OpenAPI controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.384158       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.384420       1 naming_controller.go:291] Starting NamingConditionController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.384790       1 establishing_controller.go:76] Starting EstablishingController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.385986       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.386163       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.386327       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.474963       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.476622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.496736       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.497067       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.497511       1 aggregator.go:166] initial CRD sync complete...
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.498503       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.498662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.498825       1 cache.go:39] Caches are synced for autoregister controller
	I0318 12:44:39.898875    5712 command_runner.go:130] ! I0318 12:43:30.570075       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 12:44:39.898875    5712 command_runner.go:130] ! I0318 12:43:30.585880       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 12:44:39.898920    5712 command_runner.go:130] ! I0318 12:43:30.624565       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 12:44:39.898920    5712 command_runner.go:130] ! I0318 12:43:30.681515       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 12:44:39.899048    5712 command_runner.go:130] ! I0318 12:43:30.681604       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 12:44:39.899048    5712 command_runner.go:130] ! I0318 12:43:31.410513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 12:44:39.899121    5712 command_runner.go:130] ! W0318 12:43:31.917736       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129 172.25.151.112]
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:31.919293       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:31.929122       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:34.160688       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:34.367742       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:34.406080       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:34.542647       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:34.562855       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 12:44:39.899121    5712 command_runner.go:130] ! W0318 12:43:51.920595       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129]
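The group/versions the apiserver reports adding above can be spot-checked against the live endpoint; a minimal sketch, assuming kubectl still points at the multinode-642600 context (the skipped beta/alpha groups should simply be absent from the output):

	# List every group/version the apiserver serves; this mirrors the
	# "Adding GroupVersion ..." lines above, minus the skipped beta/alpha APIs.
	kubectl --context multinode-642600 api-versions | sort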
	I0318 12:44:39.908056    5712 logs.go:123] Gathering logs for kube-controller-manager [a54be4436901] ...
	I0318 12:44:39.908056    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54be4436901"
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:43.818653       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:45.050029       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:45.050365       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:45.053707       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:45.056733       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:45.057073       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:45.057232       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:39.947548    5712 command_runner.go:130] ! I0318 12:18:49.569825       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 12:44:39.947548    5712 command_runner.go:130] ! I0318 12:18:49.602388       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 12:44:39.947548    5712 command_runner.go:130] ! I0318 12:18:49.603663       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 12:44:39.947619    5712 command_runner.go:130] ! I0318 12:18:49.603680       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 12:44:39.947619    5712 command_runner.go:130] ! I0318 12:18:49.621364       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 12:44:39.947659    5712 command_runner.go:130] ! I0318 12:18:49.621624       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:39.947659    5712 command_runner.go:130] ! I0318 12:18:49.621432       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 12:44:39.947659    5712 command_runner.go:130] ! I0318 12:18:49.622281       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 12:44:39.947659    5712 command_runner.go:130] ! I0318 12:18:49.644362       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 12:44:39.947727    5712 command_runner.go:130] ! I0318 12:18:49.644758       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 12:44:39.947727    5712 command_runner.go:130] ! I0318 12:18:49.646607       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 12:44:39.947727    5712 command_runner.go:130] ! I0318 12:18:49.660400       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 12:44:39.947825    5712 command_runner.go:130] ! I0318 12:18:49.661053       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 12:44:39.947825    5712 command_runner.go:130] ! I0318 12:18:49.670023       1 shared_informer.go:318] Caches are synced for tokens
	I0318 12:44:39.947825    5712 command_runner.go:130] ! I0318 12:18:49.679784       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 12:44:39.947825    5712 command_runner.go:130] ! I0318 12:18:49.680015       1 expand_controller.go:328] "Starting expand controller"
	I0318 12:44:39.947825    5712 command_runner.go:130] ! I0318 12:18:49.680028       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 12:44:39.947910    5712 command_runner.go:130] ! I0318 12:18:49.692925       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 12:44:39.947910    5712 command_runner.go:130] ! I0318 12:18:49.693164       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 12:44:39.947910    5712 command_runner.go:130] ! I0318 12:18:49.693449       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 12:44:39.947910    5712 command_runner.go:130] ! I0318 12:18:49.727464       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 12:44:39.947910    5712 command_runner.go:130] ! I0318 12:18:49.727835       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 12:44:39.947982    5712 command_runner.go:130] ! I0318 12:18:49.727848       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 12:44:39.947982    5712 command_runner.go:130] ! I0318 12:18:49.742409       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 12:44:39.947982    5712 command_runner.go:130] ! I0318 12:18:49.743029       1 disruption.go:433] "Sending events to api server."
	I0318 12:44:39.947982    5712 command_runner.go:130] ! I0318 12:18:49.743301       1 disruption.go:444] "Starting disruption controller"
	I0318 12:44:39.947982    5712 command_runner.go:130] ! I0318 12:18:49.743449       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 12:44:39.947982    5712 command_runner.go:130] ! I0318 12:18:49.759716       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 12:44:39.948058    5712 command_runner.go:130] ! I0318 12:18:49.760338       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 12:44:39.948058    5712 command_runner.go:130] ! I0318 12:18:49.760376       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 12:44:39.948202    5712 command_runner.go:130] ! I0318 12:18:49.829809       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 12:44:39.948202    5712 command_runner.go:130] ! I0318 12:18:49.830343       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 12:44:39.948284    5712 command_runner.go:130] ! I0318 12:18:49.830415       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 12:44:39.948373    5712 command_runner.go:130] ! I0318 12:18:50.085725       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 12:44:39.948373    5712 command_runner.go:130] ! I0318 12:18:50.086016       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 12:44:39.948462    5712 command_runner.go:130] ! I0318 12:18:50.086167       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 12:44:39.948462    5712 command_runner.go:130] ! I0318 12:18:50.234974       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 12:44:39.948537    5712 command_runner.go:130] ! I0318 12:18:50.242121       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 12:44:39.948567    5712 command_runner.go:130] ! I0318 12:18:50.242138       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 12:44:39.948567    5712 command_runner.go:130] ! I0318 12:18:50.384031       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 12:44:39.948645    5712 command_runner.go:130] ! I0318 12:18:50.384090       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 12:44:39.948645    5712 command_runner.go:130] ! I0318 12:18:50.384100       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 12:44:39.948708    5712 command_runner.go:130] ! I0318 12:18:50.384108       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 12:44:39.948708    5712 command_runner.go:130] ! I0318 12:18:50.530182       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 12:44:39.948708    5712 command_runner.go:130] ! I0318 12:18:50.530258       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 12:44:39.948786    5712 command_runner.go:130] ! I0318 12:18:50.530267       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 12:44:39.948814    5712 command_runner.go:130] ! I0318 12:18:50.695232       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 12:44:39.948814    5712 command_runner.go:130] ! I0318 12:18:50.695351       1 job_controller.go:226] "Starting job controller"
	I0318 12:44:39.948814    5712 command_runner.go:130] ! I0318 12:18:50.695361       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 12:44:39.948814    5712 command_runner.go:130] ! I0318 12:18:50.833418       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 12:44:39.948939    5712 command_runner.go:130] ! I0318 12:18:50.833674       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 12:44:39.948939    5712 command_runner.go:130] ! I0318 12:18:50.833686       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 12:44:39.949052    5712 command_runner.go:130] ! I0318 12:18:50.998838       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 12:44:39.949083    5712 command_runner.go:130] ! I0318 12:18:50.999193       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 12:44:39.949083    5712 command_runner.go:130] ! I0318 12:18:50.999227       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 12:44:39.949149    5712 command_runner.go:130] ! I0318 12:18:51.141445       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 12:44:39.949195    5712 command_runner.go:130] ! I0318 12:18:51.141508       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 12:44:39.949246    5712 command_runner.go:130] ! I0318 12:18:51.141518       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 12:44:39.949343    5712 command_runner.go:130] ! I0318 12:18:51.279642       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 12:44:39.949477    5712 command_runner.go:130] ! I0318 12:18:51.279728       1 gc_controller.go:101] "Starting GC controller"
	I0318 12:44:39.949504    5712 command_runner.go:130] ! I0318 12:18:51.279742       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 12:44:39.949504    5712 command_runner.go:130] ! I0318 12:18:51.429394       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 12:44:39.949583    5712 command_runner.go:130] ! I0318 12:18:51.429600       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 12:44:39.949633    5712 command_runner.go:130] ! I0318 12:18:51.429612       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 12:44:39.949687    5712 command_runner.go:130] ! I0318 12:19:01.598915       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 12:44:39.949716    5712 command_runner.go:130] ! I0318 12:19:01.598966       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 12:44:39.949787    5712 command_runner.go:130] ! I0318 12:19:01.599163       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 12:44:39.949787    5712 command_runner.go:130] ! I0318 12:19:01.599174       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 12:44:39.949787    5712 command_runner.go:130] ! I0318 12:19:01.601488       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.601803       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.601987       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.602013       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.602019       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.623744       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.624435       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.624966       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.663430       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.663839       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.663858       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710104       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710384       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710455       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710487       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710760       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710795       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710822       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 12:44:39.950470    5712 command_runner.go:130] ! I0318 12:19:01.710886       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 12:44:39.950470    5712 command_runner.go:130] ! I0318 12:19:01.710930       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 12:44:39.950517    5712 command_runner.go:130] ! I0318 12:19:01.710986       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 12:44:39.950517    5712 command_runner.go:130] ! I0318 12:19:01.711095       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 12:44:39.950517    5712 command_runner.go:130] ! I0318 12:19:01.711137       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 12:44:39.950563    5712 command_runner.go:130] ! I0318 12:19:01.711160       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 12:44:39.950563    5712 command_runner.go:130] ! I0318 12:19:01.711179       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 12:44:39.950563    5712 command_runner.go:130] ! I0318 12:19:01.711211       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 12:44:39.950563    5712 command_runner.go:130] ! I0318 12:19:01.711237       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 12:44:39.950563    5712 command_runner.go:130] ! I0318 12:19:01.711261       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 12:44:39.950563    5712 command_runner.go:130] ! I0318 12:19:01.711286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 12:44:39.950638    5712 command_runner.go:130] ! I0318 12:19:01.711316       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 12:44:39.950638    5712 command_runner.go:130] ! I0318 12:19:01.711339       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 12:44:39.950638    5712 command_runner.go:130] ! I0318 12:19:01.711356       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 12:44:39.950638    5712 command_runner.go:130] ! I0318 12:19:01.711486       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 12:44:39.950720    5712 command_runner.go:130] ! I0318 12:19:01.711654       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:39.950720    5712 command_runner.go:130] ! I0318 12:19:01.711784       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 12:44:39.950757    5712 command_runner.go:130] ! I0318 12:19:01.715155       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.715586       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.715886       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.732340       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.732695       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.732944       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.747011       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.747361       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.747484       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 12:44:39.950787    5712 command_runner.go:130] ! E0318 12:19:01.771424       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.771527       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.771544       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.772072       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! E0318 12:19:01.775461       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.775656       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.788795       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.789335       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.789368       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.809091       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.809368       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.809720       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.846190       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.846779       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.846879       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.137994       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.138059       1 horizontal.go:200] "Starting HPA controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.138069       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.189502       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.189864       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.190041       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.191172       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.191256       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.191347       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.951353    5712 command_runner.go:130] ! I0318 12:19:02.193057       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 12:44:39.951353    5712 command_runner.go:130] ! I0318 12:19:02.193152       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.193246       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.194807       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.194851       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.195648       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.194886       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.345061       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.347311       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.364524       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.380069       1 shared_informer.go:318] Caches are synced for expand
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.390503       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.391317       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.393201       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.402532       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.419971       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.421082       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600\" does not exist"
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.427201       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.427876       1 shared_informer.go:318] Caches are synced for service account
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.429003       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.429629       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.430311       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.432115       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.434603       1 shared_informer.go:318] Caches are synced for TTL
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.437362       1 shared_informer.go:318] Caches are synced for deployment
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.438306       1 shared_informer.go:318] Caches are synced for HPA
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.441785       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.442916       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.444302       1 shared_informer.go:318] Caches are synced for disruption
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.447137       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.447694       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.452098       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.454023       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.461158       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.464623       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.480847       1 shared_informer.go:318] Caches are synced for GC
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.487772       1 shared_informer.go:318] Caches are synced for namespace
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.490082       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.494160       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.499312       1 shared_informer.go:318] Caches are synced for node
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.499587       1 range_allocator.go:174] "Sending events to api server"
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.499772       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.500365       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.500954       1 shared_informer.go:318] Caches are synced for job
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.501438       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.501724       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.503931       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 12:44:39.952089    5712 command_runner.go:130] ! I0318 12:19:02.509883       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 12:44:39.952089    5712 command_runner.go:130] ! I0318 12:19:02.528934       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600" podCIDRs=["10.244.0.0/24"]
	I0318 12:44:39.952089    5712 command_runner.go:130] ! I0318 12:19:02.565942       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:39.952089    5712 command_runner.go:130] ! I0318 12:19:02.603468       1 shared_informer.go:318] Caches are synced for taint
	I0318 12:44:39.952089    5712 command_runner.go:130] ! I0318 12:19:02.603627       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 12:44:39.952089    5712 command_runner.go:130] ! I0318 12:19:02.603721       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600"
	I0318 12:44:39.952178    5712 command_runner.go:130] ! I0318 12:19:02.603760       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0318 12:44:39.952178    5712 command_runner.go:130] ! I0318 12:19:02.603782       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 12:44:39.952178    5712 command_runner.go:130] ! I0318 12:19:02.603821       1 taint_manager.go:210] "Sending events to api server"
	I0318 12:44:39.952178    5712 command_runner.go:130] ! I0318 12:19:02.605481       1 event.go:307] "Event occurred" object="multinode-642600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600 event: Registered Node multinode-642600 in Controller"
	I0318 12:44:39.952178    5712 command_runner.go:130] ! I0318 12:19:02.613688       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:39.952245    5712 command_runner.go:130] ! I0318 12:19:02.644197       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.952245    5712 command_runner.go:130] ! I0318 12:19:02.675188       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.952245    5712 command_runner.go:130] ! I0318 12:19:02.675510       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.952360    5712 command_runner.go:130] ! I0318 12:19:02.681286       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.952360    5712 command_runner.go:130] ! I0318 12:19:03.023915       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.023946       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.029139       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.075135       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.175071       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kpt4f"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.181384       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4dg79"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.624405       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fgn7v"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.691902       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xkgdt"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.810454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="734.97569ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.847906       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="37.087083ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.945758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.729709ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.945958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.501µs"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:04.640409       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:04.732241       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-xkgdt"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:04.763359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.567183ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:04.828298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.870031ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:04.890459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.083804ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:04.890764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.4µs"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:15.938090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="157.9µs"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:15.982953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.301µs"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:17.607464       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:19.208242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.7µs"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:19.274086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.124146ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:19.275145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="211.9µs"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:22:12.652722       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m02\" does not exist"
	I0318 12:44:39.952920    5712 command_runner.go:130] ! I0318 12:22:12.679760       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m02" podCIDRs=["10.244.1.0/24"]
	I0318 12:44:39.952920    5712 command_runner.go:130] ! I0318 12:22:12.706735       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d5llj"
	I0318 12:44:39.952920    5712 command_runner.go:130] ! I0318 12:22:12.706774       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vts9f"
	I0318 12:44:39.952920    5712 command_runner.go:130] ! I0318 12:22:17.642129       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m02"
	I0318 12:44:39.953018    5712 command_runner.go:130] ! I0318 12:22:17.642212       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller"
	I0318 12:44:39.953045    5712 command_runner.go:130] ! I0318 12:22:34.263318       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953045    5712 command_runner.go:130] ! I0318 12:23:01.851486       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 12:44:39.953045    5712 command_runner.go:130] ! I0318 12:23:01.881281       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-hmhdf"
	I0318 12:44:39.953144    5712 command_runner.go:130] ! I0318 12:23:01.924301       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-48qkw"
	I0318 12:44:39.953144    5712 command_runner.go:130] ! I0318 12:23:01.946058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.676064ms"
	I0318 12:44:39.953144    5712 command_runner.go:130] ! I0318 12:23:02.049702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="103.251772ms"
	I0318 12:44:39.953144    5712 command_runner.go:130] ! I0318 12:23:02.049789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="35.4µs"
	I0318 12:44:39.953224    5712 command_runner.go:130] ! I0318 12:23:04.783277       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.030749ms"
	I0318 12:44:39.953224    5712 command_runner.go:130] ! I0318 12:23:04.783520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.9µs"
	I0318 12:44:39.953224    5712 command_runner.go:130] ! I0318 12:23:05.441638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.350047ms"
	I0318 12:44:39.953305    5712 command_runner.go:130] ! I0318 12:23:05.441876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="105µs"
	I0318 12:44:39.953393    5712 command_runner.go:130] ! I0318 12:27:09.073772       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:39.953393    5712 command_runner.go:130] ! I0318 12:27:09.075345       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953393    5712 command_runner.go:130] ! I0318 12:27:09.095707       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.2.0/24"]
	I0318 12:44:39.953393    5712 command_runner.go:130] ! I0318 12:27:09.110695       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-khbjt"
	I0318 12:44:39.953393    5712 command_runner.go:130] ! I0318 12:27:09.110730       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-thkjp"
	I0318 12:44:39.953521    5712 command_runner.go:130] ! I0318 12:27:12.715112       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m03"
	I0318 12:44:39.953521    5712 command_runner.go:130] ! I0318 12:27:12.715611       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:39.953586    5712 command_runner.go:130] ! I0318 12:27:30.856729       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953614    5712 command_runner.go:130] ! I0318 12:35:52.853028       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953614    5712 command_runner.go:130] ! I0318 12:35:52.854041       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:44:39.953614    5712 command_runner.go:130] ! I0318 12:35:52.871920       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.953675    5712 command_runner.go:130] ! I0318 12:35:52.891158       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.953675    5712 command_runner.go:130] ! I0318 12:38:40.101072       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953675    5712 command_runner.go:130] ! I0318 12:38:42.930337       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-642600-m03 event: Removing Node multinode-642600-m03 from Controller"
	I0318 12:44:39.953779    5712 command_runner.go:130] ! I0318 12:38:46.825246       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953838    5712 command_runner.go:130] ! I0318 12:38:46.827225       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:39.953838    5712 command_runner.go:130] ! I0318 12:38:46.865011       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.3.0/24"]
	I0318 12:44:39.953838    5712 command_runner.go:130] ! I0318 12:38:47.931681       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:39.953838    5712 command_runner.go:130] ! I0318 12:38:52.975724       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953969    5712 command_runner.go:130] ! I0318 12:40:33.280094       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.954063    5712 command_runner.go:130] ! I0318 12:40:33.281180       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:44:39.954129    5712 command_runner.go:130] ! I0318 12:40:33.601041       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.954248    5712 command_runner.go:130] ! I0318 12:40:33.698293       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
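	The controller-manager entries above trace a full lifecycle for multinode-642600-m03: it goes NodeNotReady at 12:35:52, is removed from the controller at 12:38:42, and is re-registered at 12:38:46 with a fresh PodCIDR (10.244.3.0/24, replacing the 10.244.2.0/24 it held before). A quick spot-check of the re-added node's PodCIDR and readiness is sketched below; the kubectl context name is assumed to match the profile name used throughout this run, and the check itself is not part of the test harness.

	# Hypothetical spot-check (context and node names taken from the log above):
	kubectl --context multinode-642600 get node multinode-642600-m03 \
	  -o jsonpath='{.spec.podCIDR}{"\n"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'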
	I0318 12:44:39.974432    5712 logs.go:123] Gathering logs for container status ...
	I0318 12:44:39.974432    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 12:44:40.071437    5712 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0318 12:44:40.071541    5712 command_runner.go:130] > 566e40ce923f7       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   e1b2432b0ed66       busybox-5b5d89c9d6-48qkw
	I0318 12:44:40.071541    5712 command_runner.go:130] > fcf17db92b351       ead0a4a53df89                                                                                         5 seconds ago        Running             coredns                   1                   1090dd5740980       coredns-5dd5756b68-fgn7v
	I0318 12:44:40.071541    5712 command_runner.go:130] > 4652c26c0904e       6e38f40d628db                                                                                         23 seconds ago       Running             storage-provisioner       2                   889c16eb0ab73       storage-provisioner
	I0318 12:44:40.071716    5712 command_runner.go:130] > 9fec05a61d2a9       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   5ecbdcbdad3fa       kindnet-kpt4f
	I0318 12:44:40.071760    5712 command_runner.go:130] > 787ade2ea2cd0       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   889c16eb0ab73       storage-provisioner
	I0318 12:44:40.071815    5712 command_runner.go:130] > 575b41a3a85a4       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   7a2f0ccaf5c4c       kube-proxy-4dg79
	I0318 12:44:40.071839    5712 command_runner.go:130] > a48a6d310b868       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   a7281d6e698ea       kube-apiserver-multinode-642600
	I0318 12:44:40.071929    5712 command_runner.go:130] > 14ae9398d33b1       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   eca6768355c74       kube-controller-manager-multinode-642600
	I0318 12:44:40.071954    5712 command_runner.go:130] > bd1e4f4d262e3       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   f62197122538f       kube-scheduler-multinode-642600
	I0318 12:44:40.071954    5712 command_runner.go:130] > 8e7911b58c587       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   67004ee038ee4       etcd-multinode-642600
	I0318 12:44:40.071954    5712 command_runner.go:130] > a8dd2eacb7251       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago       Exited              busybox                   0                   29bb4d534c2e2       busybox-5b5d89c9d6-48qkw
	I0318 12:44:40.071954    5712 command_runner.go:130] > e81f1d2fdb360       ead0a4a53df89                                                                                         25 minutes ago       Exited              coredns                   0                   ed38da653fbef       coredns-5dd5756b68-fgn7v
	I0318 12:44:40.071954    5712 command_runner.go:130] > 5cf42651cb21d       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago       Exited              kindnet-cni               0                   fef37141be6db       kindnet-kpt4f
	I0318 12:44:40.071954    5712 command_runner.go:130] > 4bbad08fe59ac       83f6cc407eed8                                                                                         25 minutes ago       Exited              kube-proxy                0                   2f4709a3a45a4       kube-proxy-4dg79
	I0318 12:44:40.071954    5712 command_runner.go:130] > a54be44369019       d058aa5ab969c                                                                                         25 minutes ago       Exited              kube-controller-manager   0                   d766c4514f0bf       kube-controller-manager-multinode-642600
	I0318 12:44:40.071954    5712 command_runner.go:130] > 47777d4c0b90d       e3db313c6dbc0                                                                                         25 minutes ago       Exited              kube-scheduler            0                   3500a9f1ca84e       kube-scheduler-multinode-642600
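	The container-status listing above is produced by the fallback one-liner at 12:44:39.974432, which prefers crictl and falls back to docker ps -a when crictl is unavailable. A minimal sketch of replaying that probe by hand follows; minikube ssh into the profile VM is an assumption, and the `which crictl` resolution from the original command is dropped for brevity.

	# Hypothetical manual replay of the container-status probe (command adapted from the log):
	minikube -p multinode-642600 ssh "sudo crictl ps -a || sudo docker ps -a"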
	I0318 12:44:40.074384    5712 logs.go:123] Gathering logs for kubelet ...
	I0318 12:44:40.074437    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 12:44:40.106622    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:40.106622    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.841405    1388 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:40.106622    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.841736    1388 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:40.106622    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.842325    1388 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:40.106728    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: E0318 12:43:20.842583    1388 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 12:44:40.106728    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:40.106728    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.629315    1445 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.629808    1445 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.631096    1445 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: E0318 12:43:21.631229    1445 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
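	Both failed kubelet starts above abort for the same reason: the bootstrap kubeconfig at /etc/kubernetes/bootstrap-kubelet.conf does not exist on the node. The third start, immediately below, gets past this because it finds the previously issued client certificate at /var/lib/kubelet/pki/kubelet-client-current.pem and loads that instead of bootstrapping. A small illustrative check of which credential path a node will take is sketched here; both paths are copied from the log, but the loop itself is not part of the test harness.

	# Hypothetical credential-path check on the node (paths appear in the log above):
	for f in /etc/kubernetes/bootstrap-kubelet.conf /var/lib/kubelet/pki/kubelet-client-current.pem; do
	  sudo test -e "$f" && echo "present: $f" || echo "missing: $f"
	done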
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:23 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.100950    1523 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.101311    1523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.101646    1523 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.108175    1523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.123413    1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.204504    1523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205069    1523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205344    1523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205667    1523 topology_manager.go:138] "Creating topology manager with none policy"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205685    1523 container_manager_linux.go:301] "Creating device plugin manager"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.206240    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.208674    1523 kubelet.go:393] "Attempting to sync node with API server"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.208817    1523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.209351    1523 kubelet.go:309] "Adding apiserver pod source"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.209491    1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.212857    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107366    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.213311    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107366    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.219866    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107366    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.220057    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107366    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.240215    1523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0318 12:44:40.107366    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.245761    1523 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0318 12:44:40.107551    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.248742    1523 server.go:1232] "Started kubelet"
	I0318 12:44:40.107551    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.249814    1523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0318 12:44:40.107551    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.251561    1523 server.go:462] "Adding debug handlers to kubelet server"
	I0318 12:44:40.107551    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.254285    1523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0318 12:44:40.107551    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.255480    1523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0318 12:44:40.107748    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.255659    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-642600.17bddc6f5820f7a9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-642600", UID:"multinode-642600", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-642600"}, FirstTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-642600"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.25.148.129:8443: connect: connection refused'(may retry after sleeping)
	I0318 12:44:40.107748    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.259469    1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0318 12:44:40.107748    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.261490    1523 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0318 12:44:40.107748    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.265275    1523 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.270368    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="200ms"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.275611    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.275814    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.317069    1523 reconciler_new.go:29] "Reconciler: start to sync state"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327943    1523 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327963    1523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327985    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329007    1523 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329047    1523 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329057    1523 policy_none.go:49] "None policy: Start"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.336597    1523 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.336631    1523 state_mem.go:35] "Initializing new in-memory state store"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.337548    1523 state_mem.go:75] "Updated machine memory state"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.345495    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.348154    1523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.351399    1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.355603    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.356232    1523 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.357037    1523 kubelet.go:2303] "Starting kubelet main sync loop"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.359069    1523 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.367050    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.367230    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.387242    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 12:44:40.108352    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 12:44:40.108352    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 12:44:40.108352    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.387428    1523 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-642600\" not found"
	I0318 12:44:40.108352    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.399151    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:40.108352    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.399841    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:40.108352    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.460339    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d5f09afee1a6ef36657c1ae3335ddda6" podNamespace="kube-system" podName="etcd-multinode-642600"
	I0318 12:44:40.108483    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.472389    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="400ms"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.474475    1523 topology_manager.go:215] "Topology Admit Handler" podUID="624de65f019baf96d4a0e2fb6064e413" podNamespace="kube-system" podName="kube-apiserver-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.487469    1523 topology_manager.go:215] "Topology Admit Handler" podUID="a1608bc774d0b3e96e1b6fbbded5cb52" podNamespace="kube-system" podName="kube-controller-manager-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.500311    1523 topology_manager.go:215] "Topology Admit Handler" podUID="cf50844b540be8ed0b3e767db413ac8f" podNamespace="kube-system" podName="kube-scheduler-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.527553    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d5f09afee1a6ef36657c1ae3335ddda6-etcd-certs\") pod \"etcd-multinode-642600\" (UID: \"d5f09afee1a6ef36657c1ae3335ddda6\") " pod="kube-system/etcd-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.527604    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d5f09afee1a6ef36657c1ae3335ddda6-etcd-data\") pod \"etcd-multinode-642600\" (UID: \"d5f09afee1a6ef36657c1ae3335ddda6\") " pod="kube-system/etcd-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534726    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed38da653fbefea9aeb0ebdb91f985394a7a792571704a4875018f5a6bc9abda"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534857    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d766c4514f0bf79902b72d04d9e3a09fc2bcf5ef330f41cd3e84e63f5151f2b6"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534873    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f100b1062a56929e04e6e4377055b065d93a28c504f060cce4695165a2c33db0"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534885    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a9b4c05a5ccd5364b8dac2797803c98520c4f98df0fba77af7521af64a15152"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534943    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f4709a3a45a45f0c67f457df8bb202ea2867cfedeaec4a164509190df13f510"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534961    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3500a9f1ca84ed3d58cdd473a0c7c47a59643858c05dfd90247a09b1b43302bd"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.552869    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad98ae0cd7c7708c7e02f0b23fc33f1ca2b404bd7fec324c21beefcbe17d009"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.571969    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.589127    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef37141be6db2ba71fd0f1d2feee00d6ab5d31d607323e4f5ffab4a3e70cfa5"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.614112    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.616006    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629143    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-flexvolume-dir\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629404    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629689    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629754    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-ca-certs\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:40.109121    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629780    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-k8s-certs\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:40.109121    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629802    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-kubeconfig\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:40.109350    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629825    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf50844b540be8ed0b3e767db413ac8f-kubeconfig\") pod \"kube-scheduler-multinode-642600\" (UID: \"cf50844b540be8ed0b3e767db413ac8f\") " pod="kube-system/kube-scheduler-multinode-642600"
	I0318 12:44:40.109427    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629860    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-ca-certs\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629919    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-k8s-certs\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.875125    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="800ms"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.030740    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.031776    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.266849    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.266980    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.674768    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7281d6e698ea2dc42d7d3093ccde32b770bf8367fdb58230694380f40daeb9f"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.676706    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="1.6s"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.692553    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eca6768355c74817c50b811b96b5fcc93a181c4968c53d4d4b0d0252ff6dbd0a"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.700976    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.701062    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.708111    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f62197122538f83943df8b19710794ea6ea9a9ffa884082a1a62435e9b152c3f"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.731607    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.731695    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.790774    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.110034    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.790867    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.110034    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.868581    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:40.110034    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.869663    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 kubelet[1523]: E0318 12:43:26.129309    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-642600.17bddc6f5820f7a9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-642600", UID:"multinode-642600", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-642600"}, FirstTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-642600"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.25.148.129:8443: connect: connection refused'(may retry after sleeping)
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:27 multinode-642600 kubelet[1523]: I0318 12:43:27.488157    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.626198    1523 kubelet_node_status.go:108] "Node was previously registered" node="multinode-642600"
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.626989    1523 kubelet_node_status.go:73] "Successfully registered node" node="multinode-642600"
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.640050    1523 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.642279    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.658382    1523 setters.go:552] "Node became not ready" node="multinode-642600" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-18T12:43:30Z","lastTransitionTime":"2024-03-18T12:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0318 12:44:40.110411    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.223393    1523 apiserver.go:52] "Watching apiserver"
	I0318 12:44:40.110411    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.230566    1523 topology_manager.go:215] "Topology Admit Handler" podUID="acd9d7a0-0e27-4bbb-8562-6fbf374742ca" podNamespace="kube-system" podName="kindnet-kpt4f"
	I0318 12:44:40.110411    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231421    1523 topology_manager.go:215] "Topology Admit Handler" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b" podNamespace="kube-system" podName="coredns-5dd5756b68-fgn7v"
	I0318 12:44:40.110534    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231644    1523 topology_manager.go:215] "Topology Admit Handler" podUID="449242c2-ad12-4da5-b339-3be7ab8a9b16" podNamespace="kube-system" podName="kube-proxy-4dg79"
	I0318 12:44:40.110534    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231779    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d2718b8a-26a9-4c86-bf9a-221d1ee23ceb" podNamespace="kube-system" podName="storage-provisioner"
	I0318 12:44:40.110534    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231939    1523 topology_manager.go:215] "Topology Admit Handler" podUID="45969c0e-ac43-459e-95c0-86f7b76947db" podNamespace="default" podName="busybox-5b5d89c9d6-48qkw"
	I0318 12:44:40.110650    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.232191    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.110740    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.233435    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.110768    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.235227    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-642600" podUID="4aa98cb9-f6ab-40b3-8c15-235ba4e09985"
	I0318 12:44:40.110768    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.236365    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-642600" podUID="237133d7-6f1a-42ee-8cf2-a2d7564d67fc"
	I0318 12:44:40.110768    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.266715    1523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0318 12:44:40.110865    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.289094    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:40.110893    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.301996    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-642600"
	I0318 12:44:40.110893    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.322408    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/449242c2-ad12-4da5-b339-3be7ab8a9b16-lib-modules\") pod \"kube-proxy-4dg79\" (UID: \"449242c2-ad12-4da5-b339-3be7ab8a9b16\") " pod="kube-system/kube-proxy-4dg79"
	I0318 12:44:40.111002    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.322793    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-xtables-lock\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:40.111002    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323081    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d2718b8a-26a9-4c86-bf9a-221d1ee23ceb-tmp\") pod \"storage-provisioner\" (UID: \"d2718b8a-26a9-4c86-bf9a-221d1ee23ceb\") " pod="kube-system/storage-provisioner"
	I0318 12:44:40.111064    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323213    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-cni-cfg\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323245    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/449242c2-ad12-4da5-b339-3be7ab8a9b16-xtables-lock\") pod \"kube-proxy-4dg79\" (UID: \"449242c2-ad12-4da5-b339-3be7ab8a9b16\") " pod="kube-system/kube-proxy-4dg79"
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323294    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-lib-modules\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.324469    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.324580    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:31.824540428 +0000 UTC m=+7.835780164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339515    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339554    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339661    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:31.839645304 +0000 UTC m=+7.850885040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.384452    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-642600" podStartSLOduration=0.384368133 podCreationTimestamp="2024-03-18 12:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:43:31.360389871 +0000 UTC m=+7.371629607" watchObservedRunningTime="2024-03-18 12:43:31.384368133 +0000 UTC m=+7.395607769"
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.431280    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-642600" podStartSLOduration=0.431225058 podCreationTimestamp="2024-03-18 12:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:43:31.388015127 +0000 UTC m=+7.399254863" watchObservedRunningTime="2024-03-18 12:43:31.431225058 +0000 UTC m=+7.442464794"
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.828430    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.828605    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:32.828568222 +0000 UTC m=+8.839807858 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930285    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930420    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930532    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:32.930496159 +0000 UTC m=+8.941735795 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.133795    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889c16eb0ab731956d02a28d0337dc6ff349dc574ba10d4fc1a939fb2e09d6d3"
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.147805    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a2f0ccaf5c4c6c0019124eda20c358dfa8aa20f0c92ade10aa3de32608e3527"
	I0318 12:44:40.111649    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.369742    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d04d3e415061983b742e6c14f1a5f562" path="/var/lib/kubelet/pods/d04d3e415061983b742e6c14f1a5f562/volumes"
	I0318 12:44:40.111649    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.371223    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ec96a596e22f5afedbd92a854d1b8bec" path="/var/lib/kubelet/pods/ec96a596e22f5afedbd92a854d1b8bec/volumes"
	I0318 12:44:40.111649    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.628360    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-642600" podUID="237133d7-6f1a-42ee-8cf2-a2d7564d67fc"
	I0318 12:44:40.111754    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.628590    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ecbdcbdad3fa79af8ef70896ae67d65b14c47b5811078c5d6d167e0f295d1bc"
	I0318 12:44:40.111754    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.836390    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.111851    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.836523    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:34.836498609 +0000 UTC m=+10.847738345 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.111851    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937295    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111851    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937349    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111851    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937443    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:34.937423048 +0000 UTC m=+10.948662684 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112041    5712 command_runner.go:130] > Mar 18 12:43:33 multinode-642600 kubelet[1523]: E0318 12:43:33.359564    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.112041    5712 command_runner.go:130] > Mar 18 12:43:33 multinode-642600 kubelet[1523]: E0318 12:43:33.359732    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.112140    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.409996    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:40.112140    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.855132    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.112140    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.855288    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:38.85526758 +0000 UTC m=+14.866507216 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.112248    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955668    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112248    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955718    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112335    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955777    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:38.955759519 +0000 UTC m=+14.966999155 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112335    5712 command_runner.go:130] > Mar 18 12:43:35 multinode-642600 kubelet[1523]: E0318 12:43:35.360249    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.112335    5712 command_runner.go:130] > Mar 18 12:43:35 multinode-642600 kubelet[1523]: E0318 12:43:35.360337    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.112335    5712 command_runner.go:130] > Mar 18 12:43:37 multinode-642600 kubelet[1523]: E0318 12:43:37.360005    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.112508    5712 command_runner.go:130] > Mar 18 12:43:37 multinode-642600 kubelet[1523]: E0318 12:43:37.360005    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.112508    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.890447    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.112508    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.890642    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:46.890560586 +0000 UTC m=+22.901800222 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.112508    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991640    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112780    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991754    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112780    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991856    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:46.991836746 +0000 UTC m=+23.003076482 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112884    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.360236    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.112884    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.360508    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.112991    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.425235    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:40.112991    5712 command_runner.go:130] > Mar 18 12:43:41 multinode-642600 kubelet[1523]: E0318 12:43:41.360362    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.113078    5712 command_runner.go:130] > Mar 18 12:43:41 multinode-642600 kubelet[1523]: E0318 12:43:41.360863    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:43 multinode-642600 kubelet[1523]: E0318 12:43:43.359722    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:43 multinode-642600 kubelet[1523]: E0318 12:43:43.360308    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:44 multinode-642600 kubelet[1523]: E0318 12:43:44.438590    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:45 multinode-642600 kubelet[1523]: E0318 12:43:45.360026    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:45 multinode-642600 kubelet[1523]: E0318 12:43:45.360101    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:46 multinode-642600 kubelet[1523]: E0318 12:43:46.970368    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:46 multinode-642600 kubelet[1523]: E0318 12:43:46.970583    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:44:02.970562522 +0000 UTC m=+38.981802258 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071352    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071390    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071448    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:44:03.071430219 +0000 UTC m=+39.082669855 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.359847    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.360318    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.113774    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.360074    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.113774    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.360604    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.113774    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.453099    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:40.113969    5712 command_runner.go:130] > Mar 18 12:43:51 multinode-642600 kubelet[1523]: E0318 12:43:51.360369    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.114037    5712 command_runner.go:130] > Mar 18 12:43:51 multinode-642600 kubelet[1523]: E0318 12:43:51.361016    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:53 multinode-642600 kubelet[1523]: E0318 12:43:53.359799    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:53 multinode-642600 kubelet[1523]: E0318 12:43:53.359935    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:54 multinode-642600 kubelet[1523]: E0318 12:43:54.467487    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:55 multinode-642600 kubelet[1523]: E0318 12:43:55.359513    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:55 multinode-642600 kubelet[1523]: E0318 12:43:55.360047    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:57 multinode-642600 kubelet[1523]: E0318 12:43:57.359796    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:57 multinode-642600 kubelet[1523]: E0318 12:43:57.359970    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.360327    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.360455    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.483297    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:44:01 multinode-642600 kubelet[1523]: E0318 12:44:01.359691    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.114635    5712 command_runner.go:130] > Mar 18 12:44:01 multinode-642600 kubelet[1523]: E0318 12:44:01.360228    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.114804    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032626    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.114896    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032722    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.0327033 +0000 UTC m=+71.043942936 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.114925    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134727    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.114992    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134857    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.115066    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.135073    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.13505028 +0000 UTC m=+71.146289916 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.115066    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360260    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.115123    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360354    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.115123    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.124509    1523 scope.go:117] "RemoveContainer" containerID="996fb0f2ade69129acd747fc5146ef4295cc7ebd79cae8e8f881a21393ddb74a"
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.125880    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: E0318 12:44:04.127355    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d2718b8a-26a9-4c86-bf9a-221d1ee23ceb)\"" pod="kube-system/storage-provisioner" podUID="d2718b8a-26a9-4c86-bf9a-221d1ee23ceb"
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 kubelet[1523]: I0318 12:44:17.359956    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.325657    1523 scope.go:117] "RemoveContainer" containerID="301c80f8b38cb79f051755af6af0fb604c0eee0689fd1f2d22a66e0969a9583f"
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.374630    1523 scope.go:117] "RemoveContainer" containerID="4b94d396876e5c7e3b8c69b01560d10ad95ff183ab3cc78a194276537cfd6cf5"
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: E0318 12:44:24.399375    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 kubelet[1523]: I0318 12:44:35.962288    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1b2432b0ed66a1175586c13232eb9b9239f18a4f9a86e2a0c5f48c1407fdb14"
	I0318 12:44:40.117318    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 kubelet[1523]: I0318 12:44:36.079817    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1090dd57409807a15613607fd810b67863a9dd9c5a8512d7a6720906641c7f26"
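
The kubelet entries above show MountVolume.SetUp failing while the "kube-system"/"coredns" ConfigMap and "default"/"kube-root-ca.crt" objects are not yet re-registered after the node restart, with the retry wait doubling each time (500ms, 1s, 2s, 4s, 8s, 16s, 32s in the durationBeforeRetry fields). A minimal Go sketch of that doubling-backoff pattern follows; it is illustrative only, not the kubelet's actual implementation, and the function name and 32s cap are assumptions taken from the progression visible in the log.

package main

import (
	"fmt"
	"time"
)

// retryWithDoublingBackoff retries op, doubling the wait between attempts
// and capping it, mirroring the durationBeforeRetry progression in the
// kubelet log above (500ms -> 1s -> 2s -> ... -> 32s). Illustrative sketch;
// not kubelet internals.
func retryWithDoublingBackoff(op func() error) error {
	wait := 500 * time.Millisecond
	const maxWait = 32 * time.Second
	for {
		err := op()
		if err == nil {
			return nil
		}
		fmt.Printf("op failed: %v; no retries permitted for %s\n", err, wait)
		time.Sleep(wait)
		if wait < maxWait {
			wait *= 2
		}
	}
}

func main() {
	attempts := 0
	// Simulate an object that becomes available after a few attempts.
	_ = retryWithDoublingBackoff(func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("object %q not registered", "kube-system/coredns")
		}
		return nil
	})
}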
	I0318 12:44:40.161036    5712 logs.go:123] Gathering logs for etcd [8e7911b58c58] ...
	I0318 12:44:40.161036    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7911b58c58"
	I0318 12:44:40.194913    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.200481Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 12:44:40.195613    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210029Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.25.148.129:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.25.148.129:2380","--initial-cluster=multinode-642600=https://172.25.148.129:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.25.148.129:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.25.148.129:2380","--name=multinode-642600","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0318 12:44:40.195613    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210181Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0318 12:44:40.195613    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.21031Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 12:44:40.195613    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210331Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.25.148.129:2380"]}
	I0318 12:44:40.195870    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210546Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 12:44:40.195870    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.222773Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"]}
	I0318 12:44:40.195985    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.228178Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-642600","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.25.148.129:2380"],"listen-peer-urls":["https://172.25.148.129:2380"],"advertise-client-urls":["https://172.25.148.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0318 12:44:40.195985    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.271498Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"41.739133ms"}
	I0318 12:44:40.196072    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.299465Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0318 12:44:40.196072    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.319578Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","commit-index":2138}
	I0318 12:44:40.196165    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.319995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 switched to configuration voters=()"}
	I0318 12:44:40.196165    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.320107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became follower at term 2"}
	I0318 12:44:40.196210    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.320138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 78764271becab2d0 [peers: [], term: 2, commit: 2138, applied: 0, lastindex: 2138, lastterm: 2]"}
	I0318 12:44:40.196210    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.325366Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0318 12:44:40.196210    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.329191Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1388}
	I0318 12:44:40.196294    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.333388Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1848}
	I0318 12:44:40.196294    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.357951Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0318 12:44:40.196359    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.372436Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"78764271becab2d0","timeout":"7s"}
	I0318 12:44:40.196359    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373126Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"78764271becab2d0"}
	I0318 12:44:40.196421    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373252Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"78764271becab2d0","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0318 12:44:40.196421    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373688Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0318 12:44:40.196421    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375391Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0318 12:44:40.196485    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375647Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0318 12:44:40.196540    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375735Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0318 12:44:40.196540    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.377469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 switched to configuration voters=(8680198388102902480)"}
	I0318 12:44:40.196577    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.377568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","added-peer-id":"78764271becab2d0","added-peer-peer-urls":["https://172.25.151.112:2380"]}
	I0318 12:44:40.196577    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.378749Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","cluster-version":"3.5"}
	I0318 12:44:40.196658    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.378942Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0318 12:44:40.196719    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.380244Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.380886Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"78764271becab2d0","initial-advertise-peer-urls":["https://172.25.148.129:2380"],"listen-peer-urls":["https://172.25.148.129:2380"],"advertise-client-urls":["https://172.25.148.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.383141Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.25.148.129:2380"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.383279Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.25.148.129:2380"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.393018Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.621966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 is starting a new election at term 2"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became pre-candidate at term 2"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgPreVoteResp from 78764271becab2d0 at term 2"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became candidate at term 3"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgVoteResp from 78764271becab2d0 at term 3"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became leader at term 3"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 78764271becab2d0 elected leader 78764271becab2d0 at term 3"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641347Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"78764271becab2d0","local-member-attributes":"{Name:multinode-642600 ClientURLs:[https://172.25.148.129:2379]}","request-path":"/0/members/78764271becab2d0/attributes","cluster-id":"31713adf8492fbc4","publish-timeout":"7s"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.64409Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.644373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.650212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.148.129:2379"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.651053Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
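
The etcd block above shows a single-member cluster recovering after restart: the WAL is replayed, the member rejoins as a follower at term 2, votes for itself (pre-candidate, then candidate), and becomes leader at term 3, after which client traffic is served on 2379 and metrics on 2381. A small Go sketch that probes the /health endpoint on the metrics listener reported in the log ("serving metrics","address":"http://127.0.0.1:2381"); it assumes that endpoint is reachable from wherever the sketch runs (e.g. inside the node).

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// Probe etcd's health endpoint on the metrics listener. etcd answers
// /health on the --listen-metrics-urls address with a small JSON body
// such as {"health":"true"}.
func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:2381/health")
	if err != nil {
		fmt.Println("etcd health probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
}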
	I0318 12:44:40.204567    5712 logs.go:123] Gathering logs for coredns [fcf17db92b35] ...
	I0318 12:44:40.204567    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf17db92b35"
	I0318 12:44:40.232995    5712 command_runner.go:130] > .:53
	I0318 12:44:40.233913    5712 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	I0318 12:44:40.233913    5712 command_runner.go:130] > CoreDNS-1.10.1
	I0318 12:44:40.233913    5712 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 12:44:40.233913    5712 command_runner.go:130] > [INFO] 127.0.0.1:53681 - 55845 "HINFO IN 162544917519141994.8165783507281513505. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.028223444s
	I0318 12:44:40.234132    5712 logs.go:123] Gathering logs for coredns [e81f1d2fdb36] ...
	I0318 12:44:40.234279    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81f1d2fdb36"
	I0318 12:44:40.266860    5712 command_runner.go:130] > .:53
	I0318 12:44:40.266981    5712 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	I0318 12:44:40.266981    5712 command_runner.go:130] > CoreDNS-1.10.1
	I0318 12:44:40.266981    5712 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 12:44:40.266981    5712 command_runner.go:130] > [INFO] 127.0.0.1:48183 - 41539 "HINFO IN 767578685007701398.8900982300391989616. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.040167772s
	I0318 12:44:40.266981    5712 command_runner.go:130] > [INFO] 10.244.0.3:56190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320901s
	I0318 12:44:40.266981    5712 command_runner.go:130] > [INFO] 10.244.0.3:43050 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.04023503s
	I0318 12:44:40.267134    5712 command_runner.go:130] > [INFO] 10.244.0.3:47302 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.158419612s
	I0318 12:44:40.267191    5712 command_runner.go:130] > [INFO] 10.244.0.3:37199 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.162590352s
	I0318 12:44:40.267191    5712 command_runner.go:130] > [INFO] 10.244.1.2:48003 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216101s
	I0318 12:44:40.267191    5712 command_runner.go:130] > [INFO] 10.244.1.2:48857 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000380201s
	I0318 12:44:40.267296    5712 command_runner.go:130] > [INFO] 10.244.1.2:52412 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000070401s
	I0318 12:44:40.267296    5712 command_runner.go:130] > [INFO] 10.244.1.2:59362 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000071801s
	I0318 12:44:40.267296    5712 command_runner.go:130] > [INFO] 10.244.0.3:38833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000250501s
	I0318 12:44:40.267296    5712 command_runner.go:130] > [INFO] 10.244.0.3:34860 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.064163607s
	I0318 12:44:40.267296    5712 command_runner.go:130] > [INFO] 10.244.0.3:45210 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227601s
	I0318 12:44:40.267296    5712 command_runner.go:130] > [INFO] 10.244.0.3:32804 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001229s
	I0318 12:44:40.267376    5712 command_runner.go:130] > [INFO] 10.244.0.3:44904 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01563145s
	I0318 12:44:40.267376    5712 command_runner.go:130] > [INFO] 10.244.0.3:34958 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002035s
	I0318 12:44:40.267376    5712 command_runner.go:130] > [INFO] 10.244.0.3:59094 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001507s
	I0318 12:44:40.267376    5712 command_runner.go:130] > [INFO] 10.244.0.3:39370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181001s
	I0318 12:44:40.267376    5712 command_runner.go:130] > [INFO] 10.244.1.2:40318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000302101s
	I0318 12:44:40.267456    5712 command_runner.go:130] > [INFO] 10.244.1.2:43523 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001489s
	I0318 12:44:40.267456    5712 command_runner.go:130] > [INFO] 10.244.1.2:47882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001346s
	I0318 12:44:40.267456    5712 command_runner.go:130] > [INFO] 10.244.1.2:38222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057401s
	I0318 12:44:40.267456    5712 command_runner.go:130] > [INFO] 10.244.1.2:49068 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001253s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.1.2:35375 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000582s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.1.2:40933 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179201s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.1.2:36014 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002051s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.0.3:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265401s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.0.3:52912 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148001s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.0.3:33147 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143701s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.0.3:49893 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000536s
	I0318 12:44:40.267634    5712 command_runner.go:130] > [INFO] 10.244.1.2:42681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001221s
	I0318 12:44:40.267634    5712 command_runner.go:130] > [INFO] 10.244.1.2:41416 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143s
	I0318 12:44:40.267634    5712 command_runner.go:130] > [INFO] 10.244.1.2:58254 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000241501s
	I0318 12:44:40.267634    5712 command_runner.go:130] > [INFO] 10.244.1.2:35844 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197201s
	I0318 12:44:40.267727    5712 command_runner.go:130] > [INFO] 10.244.0.3:33559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102201s
	I0318 12:44:40.267727    5712 command_runner.go:130] > [INFO] 10.244.0.3:53963 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158701s
	I0318 12:44:40.267727    5712 command_runner.go:130] > [INFO] 10.244.0.3:41406 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001297s
	I0318 12:44:40.267727    5712 command_runner.go:130] > [INFO] 10.244.0.3:34685 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000264001s
	I0318 12:44:40.267727    5712 command_runner.go:130] > [INFO] 10.244.1.2:43312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001178s
	I0318 12:44:40.267821    5712 command_runner.go:130] > [INFO] 10.244.1.2:55281 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235501s
	I0318 12:44:40.267821    5712 command_runner.go:130] > [INFO] 10.244.1.2:34710 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000874s
	I0318 12:44:40.267821    5712 command_runner.go:130] > [INFO] 10.244.1.2:57686 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000557s
	I0318 12:44:40.267821    5712 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0318 12:44:40.267821    5712 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
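Each CoreDNS query line above follows the log plugin's fixed layout: client address and query id, then the question in quotes (type, class, name, protocol, request size, DNSSEC do bit, advertised UDP buffer size), followed by the response code, response flags, response size in bytes, and duration. The NXDOMAIN for the long random HINFO name is CoreDNS's loop-detection self-check at startup, not a failure, and the closing SIGTERM/lameduck pair is the previous pod shutting down during the restart. A comparable lookup can be generated by hand; the pod name here is arbitrary and the image mirrors the one this suite already uses:

	kubectl --context multinode-642600 run --rm dnsprobe --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  nslookup kubernetes.default.svc.cluster.local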
	I0318 12:44:42.783841    5712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:44:42.814476    5712 command_runner.go:130] > 1997
	I0318 12:44:42.814581    5712 api_server.go:72] duration metric: took 1m6.579646s to wait for apiserver process to appear ...
	I0318 12:44:42.814644    5712 api_server.go:88] waiting for apiserver healthz status ...
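The healthz wait that starts here polls the apiserver's /healthz endpoint until it answers. The same probe can be issued manually (a sketch; -k skips TLS verification, which is acceptable for a quick check since /healthz is normally readable by unauthenticated clients on kubeadm-style clusters):

	curl -k https://172.25.148.129:8443/healthz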
	I0318 12:44:42.824389    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 12:44:42.857700    5712 command_runner.go:130] > a48a6d310b86
	I0318 12:44:42.858239    5712 logs.go:276] 1 containers: [a48a6d310b86]
	I0318 12:44:42.866946    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 12:44:42.898126    5712 command_runner.go:130] > 8e7911b58c58
	I0318 12:44:42.898126    5712 logs.go:276] 1 containers: [8e7911b58c58]
	I0318 12:44:42.910537    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 12:44:42.939848    5712 command_runner.go:130] > fcf17db92b35
	I0318 12:44:42.939848    5712 command_runner.go:130] > e81f1d2fdb36
	I0318 12:44:42.939848    5712 logs.go:276] 2 containers: [fcf17db92b35 e81f1d2fdb36]
	I0318 12:44:42.949341    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 12:44:42.975784    5712 command_runner.go:130] > bd1e4f4d262e
	I0318 12:44:42.975784    5712 command_runner.go:130] > 47777d4c0b90
	I0318 12:44:42.975784    5712 logs.go:276] 2 containers: [bd1e4f4d262e 47777d4c0b90]
	I0318 12:44:42.984716    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 12:44:43.009854    5712 command_runner.go:130] > 575b41a3a85a
	I0318 12:44:43.009854    5712 command_runner.go:130] > 4bbad08fe59a
	I0318 12:44:43.009854    5712 logs.go:276] 2 containers: [575b41a3a85a 4bbad08fe59a]
	I0318 12:44:43.018844    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 12:44:43.054318    5712 command_runner.go:130] > 14ae9398d33b
	I0318 12:44:43.054318    5712 command_runner.go:130] > a54be4436901
	I0318 12:44:43.054318    5712 logs.go:276] 2 containers: [14ae9398d33b a54be4436901]
	I0318 12:44:43.066428    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 12:44:43.093681    5712 command_runner.go:130] > 9fec05a61d2a
	I0318 12:44:43.093681    5712 command_runner.go:130] > 5cf42651cb21
	I0318 12:44:43.093681    5712 logs.go:276] 2 containers: [9fec05a61d2a 5cf42651cb21]
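All six container lookups above use the same pattern: filter docker ps on the kubelet's k8s_<component> container-name prefix, print only the ID, then pass each ID to docker logs. Rolled into a single loop, the equivalent shell is (a sketch of the technique, not the harness's actual code, which is Go):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
	    echo "== ${c}/${id} =="
	    docker logs --tail 400 "${id}"
	  done
	done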
	I0318 12:44:43.093681    5712 logs.go:123] Gathering logs for kubelet ...
	I0318 12:44:43.093681    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 12:44:43.124226    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:43.124660    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.841405    1388 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:43.124660    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.841736    1388 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.124660    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.842325    1388 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:43.124760    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: E0318 12:43:20.842583    1388 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 12:44:43.124760    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:43.124760    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 12:44:43.124760    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0318 12:44:43.124829    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 12:44:43.124829    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:43.124829    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.629315    1445 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:43.124829    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.629808    1445 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.631096    1445 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: E0318 12:43:21.631229    1445 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
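The first two kubelet starts above exit immediately: client rotation is on but the current client certificate is apparently not yet usable, so kubelet falls back to the bootstrap kubeconfig at /etc/kubernetes/bootstrap-kubelet.conf, which does not exist. systemd's restart policy retries the unit, and the third start below succeeds once the rotated certificate at /var/lib/kubelet/pki/kubelet-client-current.pem is in place. To inspect that certificate's subject and validity window on the node (a standard openssl check, not harness output):

	sudo openssl x509 -noout -subject -dates \
	  -in /var/lib/kubelet/pki/kubelet-client-current.pem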
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:23 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.100950    1523 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.101311    1523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.101646    1523 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.108175    1523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.123413    1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.204504    1523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205069    1523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205344    1523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205667    1523 topology_manager.go:138] "Creating topology manager with none policy"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205685    1523 container_manager_linux.go:301] "Creating device plugin manager"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.206240    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.208674    1523 kubelet.go:393] "Attempting to sync node with API server"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.208817    1523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.209351    1523 kubelet.go:309] "Adding apiserver pod source"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.209491    1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.212857    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.213311    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.219866    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.220057    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.240215    1523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.245761    1523 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.248742    1523 server.go:1232] "Started kubelet"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.249814    1523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.251561    1523 server.go:462] "Adding debug handlers to kubelet server"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.254285    1523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.255480    1523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.255659    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-642600.17bddc6f5820f7a9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-642600", UID:"multinode-642600", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-642600"}, FirstTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-642600"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.25.148.129:8443: connect: connection refused'(may retry after sleeping)
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.259469    1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.261490    1523 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.265275    1523 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.270368    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="200ms"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.275611    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.275814    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.317069    1523 reconciler_new.go:29] "Reconciler: start to sync state"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327943    1523 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327963    1523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327985    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329007    1523 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329047    1523 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329057    1523 policy_none.go:49] "None policy: Start"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.336597    1523 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.336631    1523 state_mem.go:35] "Initializing new in-memory state store"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.337548    1523 state_mem.go:75] "Updated machine memory state"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.345495    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.348154    1523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.351399    1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.355603    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.356232    1523 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.357037    1523 kubelet.go:2303] "Starting kubelet main sync loop"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.359069    1523 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.367050    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.367230    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.387242    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.387428    1523 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-642600\" not found"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.399151    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.399841    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.460339    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d5f09afee1a6ef36657c1ae3335ddda6" podNamespace="kube-system" podName="etcd-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.472389    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="400ms"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.474475    1523 topology_manager.go:215] "Topology Admit Handler" podUID="624de65f019baf96d4a0e2fb6064e413" podNamespace="kube-system" podName="kube-apiserver-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.487469    1523 topology_manager.go:215] "Topology Admit Handler" podUID="a1608bc774d0b3e96e1b6fbbded5cb52" podNamespace="kube-system" podName="kube-controller-manager-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.500311    1523 topology_manager.go:215] "Topology Admit Handler" podUID="cf50844b540be8ed0b3e767db413ac8f" podNamespace="kube-system" podName="kube-scheduler-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.527553    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d5f09afee1a6ef36657c1ae3335ddda6-etcd-certs\") pod \"etcd-multinode-642600\" (UID: \"d5f09afee1a6ef36657c1ae3335ddda6\") " pod="kube-system/etcd-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.527604    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d5f09afee1a6ef36657c1ae3335ddda6-etcd-data\") pod \"etcd-multinode-642600\" (UID: \"d5f09afee1a6ef36657c1ae3335ddda6\") " pod="kube-system/etcd-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534726    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed38da653fbefea9aeb0ebdb91f985394a7a792571704a4875018f5a6bc9abda"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534857    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d766c4514f0bf79902b72d04d9e3a09fc2bcf5ef330f41cd3e84e63f5151f2b6"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534873    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f100b1062a56929e04e6e4377055b065d93a28c504f060cce4695165a2c33db0"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534885    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a9b4c05a5ccd5364b8dac2797803c98520c4f98df0fba77af7521af64a15152"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534943    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f4709a3a45a45f0c67f457df8bb202ea2867cfedeaec4a164509190df13f510"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534961    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3500a9f1ca84ed3d58cdd473a0c7c47a59643858c05dfd90247a09b1b43302bd"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.552869    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad98ae0cd7c7708c7e02f0b23fc33f1ca2b404bd7fec324c21beefcbe17d009"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.571969    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.589127    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef37141be6db2ba71fd0f1d2feee00d6ab5d31d607323e4f5ffab4a3e70cfa5"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.614112    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.616006    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629143    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-flexvolume-dir\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629404    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629689    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629754    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-ca-certs\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629780    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-k8s-certs\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629802    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-kubeconfig\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629825    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf50844b540be8ed0b3e767db413ac8f-kubeconfig\") pod \"kube-scheduler-multinode-642600\" (UID: \"cf50844b540be8ed0b3e767db413ac8f\") " pod="kube-system/kube-scheduler-multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629860    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-ca-certs\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629919    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-k8s-certs\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.875125    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="800ms"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.030740    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.031776    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.266849    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.266980    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.674768    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7281d6e698ea2dc42d7d3093ccde32b770bf8367fdb58230694380f40daeb9f"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.676706    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="1.6s"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.692553    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eca6768355c74817c50b811b96b5fcc93a181c4968c53d4d4b0d0252ff6dbd0a"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.700976    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.701062    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.708111    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f62197122538f83943df8b19710794ea6ea9a9ffa884082a1a62435e9b152c3f"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.731607    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.731695    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.790774    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.790867    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.868581    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.869663    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 kubelet[1523]: E0318 12:43:26.129309    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-642600.17bddc6f5820f7a9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-642600", UID:"multinode-642600", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-642600"}, FirstTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-642600"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.25.148.129:8443: connect: connection refused'(may retry after sleeping)
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:27 multinode-642600 kubelet[1523]: I0318 12:43:27.488157    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.626198    1523 kubelet_node_status.go:108] "Node was previously registered" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.626989    1523 kubelet_node_status.go:73] "Successfully registered node" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.640050    1523 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.642279    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.658382    1523 setters.go:552] "Node became not ready" node="multinode-642600" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-18T12:43:30Z","lastTransitionTime":"2024-03-18T12:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.223393    1523 apiserver.go:52] "Watching apiserver"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.230566    1523 topology_manager.go:215] "Topology Admit Handler" podUID="acd9d7a0-0e27-4bbb-8562-6fbf374742ca" podNamespace="kube-system" podName="kindnet-kpt4f"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231421    1523 topology_manager.go:215] "Topology Admit Handler" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b" podNamespace="kube-system" podName="coredns-5dd5756b68-fgn7v"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231644    1523 topology_manager.go:215] "Topology Admit Handler" podUID="449242c2-ad12-4da5-b339-3be7ab8a9b16" podNamespace="kube-system" podName="kube-proxy-4dg79"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231779    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d2718b8a-26a9-4c86-bf9a-221d1ee23ceb" podNamespace="kube-system" podName="storage-provisioner"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231939    1523 topology_manager.go:215] "Topology Admit Handler" podUID="45969c0e-ac43-459e-95c0-86f7b76947db" podNamespace="default" podName="busybox-5b5d89c9d6-48qkw"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.232191    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.233435    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.235227    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-642600" podUID="4aa98cb9-f6ab-40b3-8c15-235ba4e09985"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.236365    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-642600" podUID="237133d7-6f1a-42ee-8cf2-a2d7564d67fc"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.266715    1523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.289094    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.301996    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-642600"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.322408    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/449242c2-ad12-4da5-b339-3be7ab8a9b16-lib-modules\") pod \"kube-proxy-4dg79\" (UID: \"449242c2-ad12-4da5-b339-3be7ab8a9b16\") " pod="kube-system/kube-proxy-4dg79"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.322793    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-xtables-lock\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323081    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d2718b8a-26a9-4c86-bf9a-221d1ee23ceb-tmp\") pod \"storage-provisioner\" (UID: \"d2718b8a-26a9-4c86-bf9a-221d1ee23ceb\") " pod="kube-system/storage-provisioner"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323213    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-cni-cfg\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323245    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/449242c2-ad12-4da5-b339-3be7ab8a9b16-xtables-lock\") pod \"kube-proxy-4dg79\" (UID: \"449242c2-ad12-4da5-b339-3be7ab8a9b16\") " pod="kube-system/kube-proxy-4dg79"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323294    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-lib-modules\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.324469    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.324580    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:31.824540428 +0000 UTC m=+7.835780164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339515    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339554    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339661    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:31.839645304 +0000 UTC m=+7.850885040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.384452    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-642600" podStartSLOduration=0.384368133 podCreationTimestamp="2024-03-18 12:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:43:31.360389871 +0000 UTC m=+7.371629607" watchObservedRunningTime="2024-03-18 12:43:31.384368133 +0000 UTC m=+7.395607769"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.431280    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-642600" podStartSLOduration=0.431225058 podCreationTimestamp="2024-03-18 12:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:43:31.388015127 +0000 UTC m=+7.399254863" watchObservedRunningTime="2024-03-18 12:43:31.431225058 +0000 UTC m=+7.442464794"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.828430    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.828605    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:32.828568222 +0000 UTC m=+8.839807858 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930285    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930420    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930532    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:32.930496159 +0000 UTC m=+8.941735795 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.133795    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889c16eb0ab731956d02a28d0337dc6ff349dc574ba10d4fc1a939fb2e09d6d3"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.147805    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a2f0ccaf5c4c6c0019124eda20c358dfa8aa20f0c92ade10aa3de32608e3527"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.369742    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d04d3e415061983b742e6c14f1a5f562" path="/var/lib/kubelet/pods/d04d3e415061983b742e6c14f1a5f562/volumes"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.371223    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ec96a596e22f5afedbd92a854d1b8bec" path="/var/lib/kubelet/pods/ec96a596e22f5afedbd92a854d1b8bec/volumes"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.628360    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-642600" podUID="237133d7-6f1a-42ee-8cf2-a2d7564d67fc"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.628590    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ecbdcbdad3fa79af8ef70896ae67d65b14c47b5811078c5d6d167e0f295d1bc"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.836390    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.836523    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:34.836498609 +0000 UTC m=+10.847738345 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937295    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937349    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937443    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:34.937423048 +0000 UTC m=+10.948662684 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:33 multinode-642600 kubelet[1523]: E0318 12:43:33.359564    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:33 multinode-642600 kubelet[1523]: E0318 12:43:33.359732    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.409996    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.855132    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.855288    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:38.85526758 +0000 UTC m=+14.866507216 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955668    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955718    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955777    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:38.955759519 +0000 UTC m=+14.966999155 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:35 multinode-642600 kubelet[1523]: E0318 12:43:35.360249    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:35 multinode-642600 kubelet[1523]: E0318 12:43:35.360337    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:37 multinode-642600 kubelet[1523]: E0318 12:43:37.360005    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:37 multinode-642600 kubelet[1523]: E0318 12:43:37.360005    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.890447    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.890642    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:46.890560586 +0000 UTC m=+22.901800222 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991640    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991754    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991856    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:46.991836746 +0000 UTC m=+23.003076482 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.360236    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.360508    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.425235    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:41 multinode-642600 kubelet[1523]: E0318 12:43:41.360362    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:41 multinode-642600 kubelet[1523]: E0318 12:43:41.360863    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:43 multinode-642600 kubelet[1523]: E0318 12:43:43.359722    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:43 multinode-642600 kubelet[1523]: E0318 12:43:43.360308    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:44 multinode-642600 kubelet[1523]: E0318 12:43:44.438590    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:45 multinode-642600 kubelet[1523]: E0318 12:43:45.360026    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:45 multinode-642600 kubelet[1523]: E0318 12:43:45.360101    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:46 multinode-642600 kubelet[1523]: E0318 12:43:46.970368    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:46 multinode-642600 kubelet[1523]: E0318 12:43:46.970583    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:44:02.970562522 +0000 UTC m=+38.981802258 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071352    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071390    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071448    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:44:03.071430219 +0000 UTC m=+39.082669855 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.359847    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.360318    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.360074    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.360604    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.453099    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:51 multinode-642600 kubelet[1523]: E0318 12:43:51.360369    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:51 multinode-642600 kubelet[1523]: E0318 12:43:51.361016    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:53 multinode-642600 kubelet[1523]: E0318 12:43:53.359799    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:53 multinode-642600 kubelet[1523]: E0318 12:43:53.359935    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:54 multinode-642600 kubelet[1523]: E0318 12:43:54.467487    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:55 multinode-642600 kubelet[1523]: E0318 12:43:55.359513    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:55 multinode-642600 kubelet[1523]: E0318 12:43:55.360047    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:57 multinode-642600 kubelet[1523]: E0318 12:43:57.359796    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:57 multinode-642600 kubelet[1523]: E0318 12:43:57.359970    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.360327    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.360455    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.483297    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:01 multinode-642600 kubelet[1523]: E0318 12:44:01.359691    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:01 multinode-642600 kubelet[1523]: E0318 12:44:01.360228    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032626    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032722    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.0327033 +0000 UTC m=+71.043942936 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134727    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134857    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.135073    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.13505028 +0000 UTC m=+71.146289916 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360260    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360354    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.124509    1523 scope.go:117] "RemoveContainer" containerID="996fb0f2ade69129acd747fc5146ef4295cc7ebd79cae8e8f881a21393ddb74a"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.125880    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: E0318 12:44:04.127355    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d2718b8a-26a9-4c86-bf9a-221d1ee23ceb)\"" pod="kube-system/storage-provisioner" podUID="d2718b8a-26a9-4c86-bf9a-221d1ee23ceb"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 kubelet[1523]: I0318 12:44:17.359956    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.325657    1523 scope.go:117] "RemoveContainer" containerID="301c80f8b38cb79f051755af6af0fb604c0eee0689fd1f2d22a66e0969a9583f"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.374630    1523 scope.go:117] "RemoveContainer" containerID="4b94d396876e5c7e3b8c69b01560d10ad95ff183ab3cc78a194276537cfd6cf5"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: E0318 12:44:24.399375    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 kubelet[1523]: I0318 12:44:35.962288    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1b2432b0ed66a1175586c13232eb9b9239f18a4f9a86e2a0c5f48c1407fdb14"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 kubelet[1523]: I0318 12:44:36.079817    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1090dd57409807a15613607fd810b67863a9dd9c5a8512d7a6720906641c7f26"
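	The kubelet entries above show MountVolume.SetUp failing while the objects "kube-system"/"coredns" and "default"/"kube-root-ca.crt" are not yet registered, with durationBeforeRetry doubling each round: 500ms, 1s, 2s, 4s, 8s, 16s, 32s. Below is a minimal Go sketch of that doubling-retry pattern, assuming a hypothetical mount callback and a 32s cap (the largest delay visible in this excerpt); it is illustrative only, not kubelet's actual implementation.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// mountWithBackoff retries mount() with the doubling delay the
	// nestedpendingoperations entries above report: 500ms, 1s, 2s, 4s, ...
	func mountWithBackoff(mount func() error, maxDelay time.Duration) error {
		delay := 500 * time.Millisecond // first "durationBeforeRetry 500ms" in the log
		for attempt := 1; ; attempt++ {
			err := mount()
			if err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed, no retries permitted for %v: %v\n", attempt, delay, err)
			time.Sleep(delay)
			if delay < maxDelay {
				delay *= 2 // exponential backoff, as in the kubelet log above
			}
		}
	}

	func main() {
		tries := 0
		_ = mountWithBackoff(func() error {
			tries++
			if tries < 4 { // succeeds on the 4th try, after waiting 500ms+1s+2s
				return errors.New(`object "kube-system"/"coredns" not registered`)
			}
			return nil
		}, 32*time.Second)
	}

	Run against a mount that keeps failing, this prints the same 500ms/1s/2s/... progression the nestedpendingoperations lines record; the kubelet stops retrying early once the missing ConfigMap is registered, exactly as the log stops at the 32s entry.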
	I0318 12:44:43.178920    5712 logs.go:123] Gathering logs for dmesg ...
	I0318 12:44:43.178920    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 12:44:43.200748    5712 command_runner.go:130] > [Mar18 12:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0318 12:44:43.200883    5712 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0318 12:44:43.200883    5712 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0318 12:44:43.200993    5712 command_runner.go:130] > [  +0.129398] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.023142] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.067111] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.023049] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0318 12:44:43.201042    5712 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +5.633479] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.746575] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +1.948336] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +7.356358] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0318 12:44:43.201042    5712 command_runner.go:130] > [Mar18 12:42] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.196447] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [Mar18 12:43] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.116812] kauditd_printk_skb: 73 callbacks suppressed
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.565179] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.224131] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.243543] systemd-fstab-generator[1034]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +2.986318] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.197212] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.228503] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.297734] systemd-fstab-generator[1258]: Ignoring "noauto" option for root device
	I0318 12:44:43.202038    5712 command_runner.go:130] > [  +0.969011] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	I0318 12:44:43.202129    5712 command_runner.go:130] > [  +0.114690] kauditd_printk_skb: 205 callbacks suppressed
	I0318 12:44:43.202129    5712 command_runner.go:130] > [  +3.575437] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	I0318 12:44:43.202129    5712 command_runner.go:130] > [  +1.537938] kauditd_printk_skb: 44 callbacks suppressed
	I0318 12:44:43.202129    5712 command_runner.go:130] > [  +6.654182] kauditd_printk_skb: 30 callbacks suppressed
	I0318 12:44:43.202129    5712 command_runner.go:130] > [  +4.384606] systemd-fstab-generator[2563]: Ignoring "noauto" option for root device
	I0318 12:44:43.202129    5712 command_runner.go:130] > [  +7.200668] kauditd_printk_skb: 70 callbacks suppressed
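	logs.go gathers the dmesg section above by sending a shell pipeline through ssh_runner. A minimal local sketch of the same invocation using os/exec, assuming a Linux host with sudo available; it only mirrors the logged command and is not minikube-specific.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Same pipeline ssh_runner executes above, run locally for illustration.
		cmd := exec.Command("/bin/bash", "-c",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("dmesg gathering failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}

	CombinedOutput is used so that, as in the report, any stderr from sudo or dmesg ends up in the captured text rather than being lost.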
	I0318 12:44:43.203821    5712 logs.go:123] Gathering logs for kube-scheduler [47777d4c0b90] ...
	I0318 12:44:43.203821    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47777d4c0b90"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:43.828879       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:43.242083    5712 command_runner.go:130] ! W0318 12:18:46.562226       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 12:44:43.242083    5712 command_runner.go:130] ! W0318 12:18:46.562618       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! W0318 12:18:46.562705       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 12:44:43.242083    5712 command_runner.go:130] ! W0318 12:18:46.562793       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:46.615857       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:46.615957       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:46.622177       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:46.622201       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:46.625084       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:46.625162       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! W0318 12:18:46.631110       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! E0318 12:18:46.631164       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! W0318 12:18:46.634891       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:43.242610    5712 command_runner.go:130] ! E0318 12:18:46.634917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:43.242610    5712 command_runner.go:130] ! W0318 12:18:46.636313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:43.242655    5712 command_runner.go:130] ! E0318 12:18:46.638655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:43.242690    5712 command_runner.go:130] ! W0318 12:18:46.636730       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.242690    5712 command_runner.go:130] ! E0318 12:18:46.639099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.242822    5712 command_runner.go:130] ! W0318 12:18:46.636905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.242869    5712 command_runner.go:130] ! E0318 12:18:46.639254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.242869    5712 command_runner.go:130] ! W0318 12:18:46.636986       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.242938    5712 command_runner.go:130] ! E0318 12:18:46.639495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.242965    5712 command_runner.go:130] ! W0318 12:18:46.641683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:43.242965    5712 command_runner.go:130] ! E0318 12:18:46.641953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:43.243040    5712 command_runner.go:130] ! W0318 12:18:46.642236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:43.243061    5712 command_runner.go:130] ! E0318 12:18:46.642375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:43.243117    5712 command_runner.go:130] ! W0318 12:18:46.642673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:43.243154    5712 command_runner.go:130] ! W0318 12:18:46.646073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:43.243154    5712 command_runner.go:130] ! E0318 12:18:46.647270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:43.243237    5712 command_runner.go:130] ! W0318 12:18:46.646147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:43.243277    5712 command_runner.go:130] ! E0318 12:18:46.647534       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:43.243326    5712 command_runner.go:130] ! W0318 12:18:46.646208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243363    5712 command_runner.go:130] ! E0318 12:18:46.647719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243419    5712 command_runner.go:130] ! W0318 12:18:46.646271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:43.243463    5712 command_runner.go:130] ! E0318 12:18:46.647738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:43.243463    5712 command_runner.go:130] ! W0318 12:18:46.646322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:43.243463    5712 command_runner.go:130] ! E0318 12:18:46.647752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:43.243568    5712 command_runner.go:130] ! E0318 12:18:46.647915       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:46.650301       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:46.650528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.471960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.472093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.540921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.541368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.545171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.546126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.563772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.563806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.597770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.597873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.684794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.685008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.685352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.685509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.840132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.840303       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.879838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.880363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.906171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.906493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:48.059997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:48.060049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:48.096160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.244432    5712 command_runner.go:130] ! E0318 12:18:48.096304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.244432    5712 command_runner.go:130] ! W0318 12:18:48.096504       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:43.244432    5712 command_runner.go:130] ! E0318 12:18:48.096662       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:43.244432    5712 command_runner.go:130] ! W0318 12:18:48.133175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:43.244432    5712 command_runner.go:130] ! E0318 12:18:48.133469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:43.244432    5712 command_runner.go:130] ! W0318 12:18:48.135066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:43.244432    5712 command_runner.go:130] ! E0318 12:18:48.135196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:43.244432    5712 command_runner.go:130] ! I0318 12:18:50.022459       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:43.244432    5712 command_runner.go:130] ! E0318 12:40:51.995231       1 run.go:74] "command failed" err="finished without leader elect"
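The reflector warnings above are the scheduler listing cluster-scoped resources before its RBAC grants have propagated during API server startup; they stop once "Caches are synced" is logged at 12:18:50. The closing "finished without leader elect" entry at 12:40:51 is kube-scheduler's normal exit path when its leader lease ends as the node goes down, not an independent crash. A sketch of how the scheduler's permissions could be re-checked against the running cluster (the profile name from the logs is assumed to also be the kubectl context):

  kubectl --context multinode-642600 auth can-i list pods --as=system:kube-scheduler
  kubectl --context multinode-642600 get clusterrolebinding system:kube-scheduler -o yaml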
	I0318 12:44:43.255393    5712 logs.go:123] Gathering logs for kube-proxy [4bbad08fe59a] ...
	I0318 12:44:43.255393    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbad08fe59a"
	I0318 12:44:43.288410    5712 command_runner.go:130] ! I0318 12:19:04.970720       1 server_others.go:69] "Using iptables proxy"
	I0318 12:44:43.288410    5712 command_runner.go:130] ! I0318 12:19:04.997380       1 node.go:141] Successfully retrieved node IP: 172.25.151.112
	I0318 12:44:43.288410    5712 command_runner.go:130] ! I0318 12:19:05.099028       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:44:43.288410    5712 command_runner.go:130] ! I0318 12:19:05.099065       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:44:43.288410    5712 command_runner.go:130] ! I0318 12:19:05.102885       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:44:43.289495    5712 command_runner.go:130] ! I0318 12:19:05.103013       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:44:43.289550    5712 command_runner.go:130] ! I0318 12:19:05.103652       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:44:43.289550    5712 command_runner.go:130] ! I0318 12:19:05.103704       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.289550    5712 command_runner.go:130] ! I0318 12:19:05.105505       1 config.go:188] "Starting service config controller"
	I0318 12:44:43.289550    5712 command_runner.go:130] ! I0318 12:19:05.106093       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:44:43.289615    5712 command_runner.go:130] ! I0318 12:19:05.106131       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:44:43.289615    5712 command_runner.go:130] ! I0318 12:19:05.106138       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:44:43.289615    5712 command_runner.go:130] ! I0318 12:19:05.107424       1 config.go:315] "Starting node config controller"
	I0318 12:44:43.289681    5712 command_runner.go:130] ! I0318 12:19:05.107456       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:44:43.289681    5712 command_runner.go:130] ! I0318 12:19:05.206699       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:44:43.289707    5712 command_runner.go:130] ! I0318 12:19:05.206811       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:44:43.289707    5712 command_runner.go:130] ! I0318 12:19:05.207857       1 shared_informer.go:318] Caches are synced for node config
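The kube-proxy log is clean: the iptables proxier is selected, the daemon runs single-stack IPv4, and all three shared-informer caches sync by 12:19:05. If proxying were suspect after the restart, a quick follow-up check could look like this (the default k8s-app=kube-proxy pod label is assumed):

  kubectl --context multinode-642600 -n kube-system get pods -l k8s-app=kube-proxy
  kubectl --context multinode-642600 -n kube-system logs -l k8s-app=kube-proxy --tail=20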
	I0318 12:44:43.291854    5712 logs.go:123] Gathering logs for Docker ...
	I0318 12:44:43.291854    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.326780    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0318 12:44:43.326780    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.326816    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0318 12:44:43.326832    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:43.326879    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.326892    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 systemd[1]: Starting Docker Application Container Engine...
	I0318 12:44:43.326892    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.799415676Z" level=info msg="Starting up"
	I0318 12:44:43.326892    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.800442474Z" level=info msg="containerd not running, starting managed containerd"
	I0318 12:44:43.326892    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.801655972Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	I0318 12:44:43.326960    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.836542309Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 12:44:43.326960    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.866837154Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 12:44:43.327005    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.866991653Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 12:44:43.327043    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.867166153Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 12:44:43.327043    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.867346253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868353051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868455451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868755450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868785850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868803850Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868815950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.869407649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.870171948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873462742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873569242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873718241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873818241Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874315040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874434440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874453940Z" level=info msg="metadata content store policy set" policy=shared
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880096930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880252829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880377329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880397729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880414329Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880488329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880819128Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880926428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881236528Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881376427Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881400527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881426127Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881441527Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881474927Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881491327Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327752    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881506427Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327784    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881521027Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327784    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881536227Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327784    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881566927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327784    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881586627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327864    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881601327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327864    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881617327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327864    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881631227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327948    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881646527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327948    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881659427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327998    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881673727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328021    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881757827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328021    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881783527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881798027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881812927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881826827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881844827Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881868126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881889326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881902926Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882002626Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882117726Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882162226Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882178726Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882242626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882337926Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882358926Z" level=info msg="NRI interface is disabled by configuration."
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882603625Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882759725Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.883033524Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.883153424Z" level=info msg="containerd successfully booted in 0.049971s"
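The plugin-loading run above is dockerd launching its managed containerd as a child process (pid 658), which boots in about 0.05s and serves on sockets under /var/run/docker/containerd/. If the managed containerd itself needed probing, a hedged check against the socket path shown in the log would be:

  minikube -p multinode-642600 ssh -- sudo ctr --address /var/run/docker/containerd/containerd.sock version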
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:47 multinode-642600 dockerd[652]: time="2024-03-18T12:42:47.858472851Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.057442718Z" level=info msg="Loading containers: start."
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.544395210Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.632442528Z" level=info msg="Loading containers: done."
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.662805631Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.663682128Z" level=info msg="Daemon has completed initialization"
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.725498031Z" level=info msg="API listen on [::]:2376"
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 systemd[1]: Started Docker Application Container Engine.
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.725911430Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 12:44:43.328711    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 systemd[1]: Stopping Docker Application Container Engine...
	I0318 12:44:43.328711    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.631434936Z" level=info msg="Processing signal 'terminated'"
	I0318 12:44:43.328711    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.633587433Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0318 12:44:43.328798    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634258932Z" level=info msg="Daemon shutdown complete"
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634450831Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634476831Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: docker.service: Deactivated successfully.
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: Stopped Docker Application Container Engine.
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: Starting Docker Application Container Engine...
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.717087499Z" level=info msg="Starting up"
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.718262797Z" level=info msg="containerd not running, starting managed containerd"
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.719705495Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1048
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.754738639Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784193992Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784236292Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784275292Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784291492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784317492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784331992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784550091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784651691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784673391Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784704091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784764391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784996290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787641686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787744286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787950186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788044886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788091986Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788127185Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788138585Z" level=info msg="metadata content store policy set" policy=shared
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789136284Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789269784Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789298984Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789320484Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789342084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789644383Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.790600382Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791760980Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791832280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791851580Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 12:44:43.329576    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791866579Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329576    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791880279Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791969479Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791989879Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792004479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792018079Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792030379Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792042479Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792063279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792077879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792090579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792103979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792117779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792135679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792148379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792161279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792174179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792188479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792199579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792211479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792223379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792238079Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792261579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792276079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792287879Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792337479Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792356479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792368079Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792380379Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792530178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792576778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792591078Z" level=info msg="NRI interface is disabled by configuration."
	I0318 12:44:43.330360    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792811378Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 12:44:43.330515    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792927678Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 12:44:43.330537    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.793108678Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 12:44:43.330537    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.793160477Z" level=info msg="containerd successfully booted in 0.039931s"
	I0318 12:44:43.330607    5712 command_runner.go:130] > Mar 18 12:43:17 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:17.767243919Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 12:44:43.330607    5712 command_runner.go:130] > Mar 18 12:43:17 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:17.800090666Z" level=info msg="Loading containers: start."
	I0318 12:44:43.330607    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.103803081Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 12:44:43.330607    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.187726546Z" level=info msg="Loading containers: done."
	I0318 12:44:43.330672    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.216487100Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 12:44:43.330672    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.216648600Z" level=info msg="Daemon has completed initialization"
	I0318 12:44:43.330733    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.271691012Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 12:44:43.330733    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.271966711Z" level=info msg="API listen on [::]:2376"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 systemd[1]: Started Docker Application Container Engine.
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Loaded network plugin cni"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker Info: &{ID:aa9100d3-1595-41ce-b36f-06932aef3ecb Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:53 SystemTime:2024-03-18T12:43:19.415553382Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 Ke
rnelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002da070 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-642600 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[nam
e=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Start cri-dockerd grpc backend"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-fgn7v_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ed38da653fbefea9aeb0ebdb91f985394a7a792571704a4875018f5a6bc9abda\""
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-48qkw_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e\""
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.316277241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.317878239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.318571937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.319101537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356638277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356750476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331313    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356767376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331313    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.357118676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331313    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.418245378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331313    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.421018274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331313    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.421217073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331313    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.422102972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331448    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428274662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331511    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428365762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331548    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428455862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331548    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428580261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331548    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67004ee038ee4247f6f751987304426067a63cee8c1636408dd16efea728ba78/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f62197122538f83943df8b19710794ea6ea9a9ffa884082a1a62435e9b152c3f/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eca6768355c74817c50b811b96b5fcc93a181c4968c53d4d4b0d0252ff6dbd0a/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7281d6e698ea2dc42d7d3093ccde32b770bf8367fdb58230694380f40daeb9f/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879224940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879310840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879325040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879857239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.050226267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.051715465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.056267457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.056729856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.064877643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065332743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065495042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065849742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091573301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091639201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091652401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091761800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:30Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.923135971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924017669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924165569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332136    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924385369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332136    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955673419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332275    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955753819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332275    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955772119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.956168818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964148405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964256705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964669604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964999404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a2f0ccaf5c4c6c0019124eda20c358dfa8aa20f0c92ade10aa3de32608e3527/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/889c16eb0ab731956d02a28d0337dc6ff349dc574ba10d4fc1a939fb2e09d6d3/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391303322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391389722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391408822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391535621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413113087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413460286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413726486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.414492285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ecbdcbdad3fa79af8ef70896ae67d65b14c47b5811078c5d6d167e0f295d1bc/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850170088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850431387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850449987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850590387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011137468Z" level=info msg="shim disconnected" id=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 namespace=moby
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011334567Z" level=warning msg="cleaning up after shim disconnected" id=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 namespace=moby
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011364567Z" level=info msg="cleaning up dead shim" namespace=moby
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1042]: time="2024-03-18T12:44:03.012148165Z" level=info msg="ignoring event" container=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562340104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562524303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562584503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.563253802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.376262769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.376780468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.377021468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.377223268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:44:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1090dd57409807a15613607fd810b67863a9dd9c5a8512d7a6720906641c7f26/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684170919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684458920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684558520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.333178    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.685142822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.333178    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901354745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.333178    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901518146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.333178    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901538746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.333178    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901651446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.333374    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:44:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e1b2432b0ed66a1175586c13232eb9b9239f18a4f9a86e2a0c5f48c1407fdb14/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0318 12:44:43.333374    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.227440411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.333374    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.227939926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.228081131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.228507343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:40 multinode-642600 dockerd[1042]: 2024/03/18 12:44:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:40 multinode-642600 dockerd[1042]: 2024/03/18 12:44:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:40 multinode-642600 dockerd[1042]: 2024/03/18 12:44:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.365927    5712 logs.go:123] Gathering logs for describe nodes ...
	I0318 12:44:43.365927    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 12:44:43.545945    5712 command_runner.go:130] > Name:               multinode-642600
	I0318 12:44:43.545945    5712 command_runner.go:130] > Roles:              control-plane
	I0318 12:44:43.545945    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_18_52_0700
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0318 12:44:43.545945    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:43.545945    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:18:46 +0000
	I0318 12:44:43.545945    5712 command_runner.go:130] > Taints:             <none>
	I0318 12:44:43.545945    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:43.545945    5712 command_runner.go:130] > Lease:
	I0318 12:44:43.545945    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600
	I0318 12:44:43.545945    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:43.545945    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:44:41 +0000
	I0318 12:44:43.545945    5712 command_runner.go:130] > Conditions:
	I0318 12:44:43.545945    5712 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0318 12:44:43.545945    5712 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0318 12:44:43.545945    5712 command_runner.go:130] >   MemoryPressure   False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0318 12:44:43.545945    5712 command_runner.go:130] >   DiskPressure     False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0318 12:44:43.545945    5712 command_runner.go:130] >   PIDPressure      False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0318 12:44:43.545945    5712 command_runner.go:130] >   Ready            True    Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:44:11 +0000   KubeletReady                 kubelet is posting ready status
	I0318 12:44:43.545945    5712 command_runner.go:130] > Addresses:
	I0318 12:44:43.545945    5712 command_runner.go:130] >   InternalIP:  172.25.148.129
	I0318 12:44:43.545945    5712 command_runner.go:130] >   Hostname:    multinode-642600
	I0318 12:44:43.545945    5712 command_runner.go:130] > Capacity:
	I0318 12:44:43.545945    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:43.545945    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:43.545945    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:43.545945    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:43.545945    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:43.545945    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:43.545945    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:43.545945    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:43.545945    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:43.545945    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:43.545945    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:43.545945    5712 command_runner.go:130] > System Info:
	I0318 12:44:43.545945    5712 command_runner.go:130] >   Machine ID:                 021cb44913fc4689ab25739f723ae3da
	I0318 12:44:43.545945    5712 command_runner.go:130] >   System UUID:                8a1bcbab-f132-7f42-b33a-a7db97e0afe6
	I0318 12:44:43.545945    5712 command_runner.go:130] >   Boot ID:                    f11360a5-920e-4374-9d22-d06f111079d8
	I0318 12:44:43.545945    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:43.546950    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:43.546950    5712 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0318 12:44:43.546950    5712 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0318 12:44:43.546950    5712 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:43.546950    5712 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0318 12:44:43.546950    5712 command_runner.go:130] >   default                     busybox-5b5d89c9d6-48qkw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-fgn7v                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 etcd-multinode-642600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 kindnet-kpt4f                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-642600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-642600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 kube-proxy-4dg79                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-642600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:43.546950    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:43.546950    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Resource           Requests     Limits
	I0318 12:44:43.546950    5712 command_runner.go:130] >   --------           --------     ------
	I0318 12:44:43.546950    5712 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0318 12:44:43.546950    5712 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0318 12:44:43.546950    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0318 12:44:43.546950    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0318 12:44:43.546950    5712 command_runner.go:130] > Events:
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 12:44:43.546950    5712 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  Starting                 25m                kube-proxy       
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  Starting                 69s                kube-proxy       
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  Starting                 25m                kubelet          Starting kubelet.
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     25m                kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    25m                kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  25m                kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  RegisteredNode           25m                node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeReady                25m                kubelet          Node multinode-642600 status is now: NodeReady
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  Starting                 79s                kubelet          Starting kubelet.
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  79s (x8 over 79s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    79s (x8 over 79s)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	I0318 12:44:43.574954    5712 command_runner.go:130] > Name:               multinode-642600-m02
	I0318 12:44:43.574978    5712 command_runner.go:130] > Roles:              <none>
	I0318 12:44:43.574978    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:43.574978    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:43.574978    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:43.574978    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600-m02
	I0318 12:44:43.574978    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:43.574978    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:43.575084    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:43.575084    5712 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 12:44:43.575084    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_22_13_0700
	I0318 12:44:43.575084    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:43.575168    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:43.575168    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:43.575168    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:43.575168    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:22:12 +0000
	I0318 12:44:43.575168    5712 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 12:44:43.575168    5712 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 12:44:43.575168    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:43.575275    5712 command_runner.go:130] > Lease:
	I0318 12:44:43.575275    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600-m02
	I0318 12:44:43.575275    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:43.575275    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:40:15 +0000
	I0318 12:44:43.575275    5712 command_runner.go:130] > Conditions:
	I0318 12:44:43.575351    5712 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 12:44:43.575374    5712 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 12:44:43.575400    5712 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.575400    5712 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.575400    5712 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.575400    5712 command_runner.go:130] > Addresses:
	I0318 12:44:43.575400    5712 command_runner.go:130] >   InternalIP:  172.25.159.102
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Hostname:    multinode-642600-m02
	I0318 12:44:43.575400    5712 command_runner.go:130] > Capacity:
	I0318 12:44:43.575400    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:43.575400    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:43.575400    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:43.575400    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:43.575400    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:43.575400    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:43.575400    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:43.575400    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:43.575400    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:43.575400    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:43.575400    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:43.575400    5712 command_runner.go:130] > System Info:
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Machine ID:                 3840c114554e41ff9ded1410244d8aba
	I0318 12:44:43.575400    5712 command_runner.go:130] >   System UUID:                23dbf5b1-f940-4749-8caf-1ae12d869a30
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Boot ID:                    9a3fcab5-beb6-4505-b112-82809850bba3
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:43.575400    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:43.575400    5712 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0318 12:44:43.575400    5712 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0318 12:44:43.575400    5712 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:43.575400    5712 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0318 12:44:43.575400    5712 command_runner.go:130] >   default                     busybox-5b5d89c9d6-hmhdf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0318 12:44:43.575400    5712 command_runner.go:130] >   kube-system                 kindnet-d5llj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0318 12:44:43.575400    5712 command_runner.go:130] >   kube-system                 kube-proxy-vts9f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0318 12:44:43.575400    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:43.575400    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Resource           Requests   Limits
	I0318 12:44:43.575400    5712 command_runner.go:130] >   --------           --------   ------
	I0318 12:44:43.575400    5712 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 12:44:43.575400    5712 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 12:44:43.575400    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 12:44:43.575400    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 12:44:43.575400    5712 command_runner.go:130] > Events:
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 12:44:43.575400    5712 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientMemory
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasNoDiskPressure
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientPID
	I0318 12:44:43.575933    5712 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	I0318 12:44:43.575933    5712 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-642600-m02 status is now: NodeReady
	I0318 12:44:43.575933    5712 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	I0318 12:44:43.575933    5712 command_runner.go:130] >   Normal  NodeNotReady             20s                node-controller  Node multinode-642600-m02 status is now: NodeNotReady
	I0318 12:44:43.605589    5712 command_runner.go:130] > Name:               multinode-642600-m03
	I0318 12:44:43.605589    5712 command_runner.go:130] > Roles:              <none>
	I0318 12:44:43.605589    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600-m03
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_38_47_0700
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:43.605589    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:43.605589    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:38:46 +0000
	I0318 12:44:43.605589    5712 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 12:44:43.605589    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:43.605589    5712 command_runner.go:130] > Lease:
	I0318 12:44:43.605589    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600-m03
	I0318 12:44:43.605589    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:43.605589    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:39:48 +0000
	I0318 12:44:43.605589    5712 command_runner.go:130] > Conditions:
	I0318 12:44:43.605589    5712 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 12:44:43.605589    5712 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 12:44:43.605589    5712 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.605589    5712 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.605589    5712 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.605589    5712 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.605589    5712 command_runner.go:130] > Addresses:
	I0318 12:44:43.605589    5712 command_runner.go:130] >   InternalIP:  172.25.157.200
	I0318 12:44:43.605589    5712 command_runner.go:130] >   Hostname:    multinode-642600-m03
	I0318 12:44:43.605589    5712 command_runner.go:130] > Capacity:
	I0318 12:44:43.605589    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:43.605589    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:43.605589    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:43.605589    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:43.605589    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:43.605589    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:43.605589    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:43.605589    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:43.606611    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:43.606611    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:43.606611    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:43.606611    5712 command_runner.go:130] > System Info:
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Machine ID:                 b858c7f1c1bc42a69e1927ccc26ea5ce
	I0318 12:44:43.606611    5712 command_runner.go:130] >   System UUID:                8c4fd36f-ab8b-5447-9df2-542afafc5ab4
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Boot ID:                    cea0ecfe-24ab-4614-a808-1e2a7a960f26
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:43.606611    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:43.606611    5712 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0318 12:44:43.606611    5712 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0318 12:44:43.606611    5712 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:43.606611    5712 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0318 12:44:43.606611    5712 command_runner.go:130] >   kube-system                 kindnet-thkjp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0318 12:44:43.606611    5712 command_runner.go:130] >   kube-system                 kube-proxy-khbjt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0318 12:44:43.606611    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:43.606611    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Resource           Requests   Limits
	I0318 12:44:43.606611    5712 command_runner.go:130] >   --------           --------   ------
	I0318 12:44:43.606611    5712 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 12:44:43.606611    5712 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 12:44:43.606611    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 12:44:43.606611    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 12:44:43.606611    5712 command_runner.go:130] > Events:
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0318 12:44:43.606611    5712 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  Starting                 17m                    kube-proxy       
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  Starting                 5m54s                  kube-proxy       
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x5 over 17m)      kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x5 over 17m)      kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x5 over 17m)      kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeReady                17m                    kubelet          Node multinode-642600-m03 status is now: NodeReady
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  Starting                 5m57s                  kubelet          Starting kubelet.
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m57s (x2 over 5m57s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m57s (x2 over 5m57s)  kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m57s (x2 over 5m57s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m57s                  kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  RegisteredNode           5m56s                  node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeReady                5m51s                  kubelet          Node multinode-642600-m03 status is now: NodeReady
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeNotReady             4m10s                  node-controller  Node multinode-642600-m03 status is now: NodeNotReady
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
	I0318 12:44:43.618596    5712 logs.go:123] Gathering logs for kube-apiserver [a48a6d310b86] ...
	I0318 12:44:43.618596    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a48a6d310b86"
	I0318 12:44:43.657706    5712 command_runner.go:130] ! I0318 12:43:26.873064       1 options.go:220] external host was not specified, using 172.25.148.129
	I0318 12:44:43.657808    5712 command_runner.go:130] ! I0318 12:43:26.879001       1 server.go:148] Version: v1.28.4
	I0318 12:44:43.657868    5712 command_runner.go:130] ! I0318 12:43:26.879883       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.657868    5712 command_runner.go:130] ! I0318 12:43:27.623853       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 12:44:43.657868    5712 command_runner.go:130] ! I0318 12:43:27.658081       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 12:44:43.657928    5712 command_runner.go:130] ! I0318 12:43:27.658128       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 12:44:43.657987    5712 command_runner.go:130] ! I0318 12:43:27.660963       1 instance.go:298] Using reconciler: lease
	I0318 12:44:43.657987    5712 command_runner.go:130] ! I0318 12:43:27.814829       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0318 12:44:43.658018    5712 command_runner.go:130] ! W0318 12:43:27.815233       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658069    5712 command_runner.go:130] ! I0318 12:43:28.557814       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0318 12:44:43.658069    5712 command_runner.go:130] ! I0318 12:43:28.558168       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0318 12:44:43.658069    5712 command_runner.go:130] ! I0318 12:43:29.283146       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0318 12:44:43.658121    5712 command_runner.go:130] ! I0318 12:43:29.346403       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0318 12:44:43.658146    5712 command_runner.go:130] ! W0318 12:43:29.360856       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658146    5712 command_runner.go:130] ! W0318 12:43:29.360910       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658146    5712 command_runner.go:130] ! I0318 12:43:29.361419       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0318 12:44:43.658217    5712 command_runner.go:130] ! W0318 12:43:29.361431       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658217    5712 command_runner.go:130] ! I0318 12:43:29.362356       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0318 12:44:43.658217    5712 command_runner.go:130] ! I0318 12:43:29.365115       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0318 12:44:43.658217    5712 command_runner.go:130] ! W0318 12:43:29.365134       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0318 12:44:43.658277    5712 command_runner.go:130] ! W0318 12:43:29.365140       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0318 12:44:43.658300    5712 command_runner.go:130] ! I0318 12:43:29.370774       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0318 12:44:43.658300    5712 command_runner.go:130] ! W0318 12:43:29.370809       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.375063       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.375102       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.375108       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.375862       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.375929       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.375979       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.376693       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.384185       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.384228       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.384236       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.385110       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.385148       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.385155       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.388232       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.388272       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.392835       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.392872       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.392880       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.393504       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.393628       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.393636       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.401801       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.401838       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.401846       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.405508       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.409452       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.409492       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.409500       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.421682       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.421819       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.421829       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0318 12:44:43.658870    5712 command_runner.go:130] ! I0318 12:43:29.426368       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0318 12:44:43.658870    5712 command_runner.go:130] ! W0318 12:43:29.426405       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658870    5712 command_runner.go:130] ! W0318 12:43:29.426413       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658870    5712 command_runner.go:130] ! I0318 12:43:29.427337       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0318 12:44:43.658870    5712 command_runner.go:130] ! W0318 12:43:29.427376       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658870    5712 command_runner.go:130] ! I0318 12:43:29.459555       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0318 12:44:43.659009    5712 command_runner.go:130] ! W0318 12:43:29.459595       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.659039    5712 command_runner.go:130] ! I0318 12:43:30.367734       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:43.659039    5712 command_runner.go:130] ! I0318 12:43:30.367932       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:43.659039    5712 command_runner.go:130] ! I0318 12:43:30.368782       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0318 12:44:43.659106    5712 command_runner.go:130] ! I0318 12:43:30.370542       1 secure_serving.go:213] Serving securely on [::]:8443
	I0318 12:44:43.659132    5712 command_runner.go:130] ! I0318 12:43:30.370628       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:43.659132    5712 command_runner.go:130] ! I0318 12:43:30.371667       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.372321       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.372682       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.373559       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.373947       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.374159       1 available_controller.go:423] Starting AvailableConditionController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.374194       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.374404       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.374979       1 aggregator.go:164] waiting for initial CRD sync...
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.375087       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.375452       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.376837       1 controller.go:116] Starting legacy_token_tracking_controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.377105       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.377485       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.378013       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.378732       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.379224       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.379834       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.380470       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.380848       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.382047       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.382230       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.383964       1 controller.go:134] Starting OpenAPI controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.384158       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.384420       1 naming_controller.go:291] Starting NamingConditionController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.384790       1 establishing_controller.go:76] Starting EstablishingController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.385986       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.386163       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.386327       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.474963       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.476622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.496736       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.497067       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.497511       1 aggregator.go:166] initial CRD sync complete...
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.498503       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.498662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.498825       1 cache.go:39] Caches are synced for autoregister controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.570075       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 12:44:43.659695    5712 command_runner.go:130] ! I0318 12:43:30.585880       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 12:44:43.659695    5712 command_runner.go:130] ! I0318 12:43:30.624565       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 12:44:43.659695    5712 command_runner.go:130] ! I0318 12:43:30.681515       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 12:44:43.659695    5712 command_runner.go:130] ! I0318 12:43:30.681604       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 12:44:43.659695    5712 command_runner.go:130] ! I0318 12:43:31.410513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 12:44:43.659787    5712 command_runner.go:130] ! W0318 12:43:31.917736       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129 172.25.151.112]
	I0318 12:44:43.659787    5712 command_runner.go:130] ! I0318 12:43:31.919293       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 12:44:43.659787    5712 command_runner.go:130] ! I0318 12:43:31.929122       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 12:44:43.659787    5712 command_runner.go:130] ! I0318 12:43:34.160688       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 12:44:43.659787    5712 command_runner.go:130] ! I0318 12:43:34.367742       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 12:44:43.659864    5712 command_runner.go:130] ! I0318 12:43:34.406080       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 12:44:43.659890    5712 command_runner.go:130] ! I0318 12:43:34.542647       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 12:44:43.659890    5712 command_runner.go:130] ! I0318 12:43:34.562855       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 12:44:43.659919    5712 command_runner.go:130] ! W0318 12:43:51.920595       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129]
	I0318 12:44:43.668353    5712 logs.go:123] Gathering logs for etcd [8e7911b58c58] ...
	I0318 12:44:43.668353    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7911b58c58"
	I0318 12:44:43.702589    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.200481Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 12:44:43.702721    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210029Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.25.148.129:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.25.148.129:2380","--initial-cluster=multinode-642600=https://172.25.148.129:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.25.148.129:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.25.148.129:2380","--name=multinode-642600","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0318 12:44:43.702836    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210181Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0318 12:44:43.702836    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.21031Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 12:44:43.702836    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210331Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.25.148.129:2380"]}
	I0318 12:44:43.702979    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210546Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 12:44:43.703017    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.222773Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"]}
	I0318 12:44:43.703138    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.228178Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-642600","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.25.148.129:2380"],"listen-peer-urls":["https://172.25.148.129:2380"],"advertise-client-urls":["https://172.25.148.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.271498Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"41.739133ms"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.299465Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.319578Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","commit-index":2138}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.319995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 switched to configuration voters=()"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.320107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became follower at term 2"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.320138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 78764271becab2d0 [peers: [], term: 2, commit: 2138, applied: 0, lastindex: 2138, lastterm: 2]"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.325366Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.329191Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1388}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.333388Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1848}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.357951Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.372436Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"78764271becab2d0","timeout":"7s"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373126Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"78764271becab2d0"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373252Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"78764271becab2d0","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373688Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375391Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375647Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375735Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.377469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 switched to configuration voters=(8680198388102902480)"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.377568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","added-peer-id":"78764271becab2d0","added-peer-peer-urls":["https://172.25.151.112:2380"]}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.378749Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","cluster-version":"3.5"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.378942Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0318 12:44:43.703712    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.380244Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 12:44:43.703756    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.380886Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"78764271becab2d0","initial-advertise-peer-urls":["https://172.25.148.129:2380"],"listen-peer-urls":["https://172.25.148.129:2380"],"advertise-client-urls":["https://172.25.148.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0318 12:44:43.703756    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.383141Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.25.148.129:2380"}
	I0318 12:44:43.703886    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.383279Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.25.148.129:2380"}
	I0318 12:44:43.703886    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.393018Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0318 12:44:43.703916    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.621966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 is starting a new election at term 2"}
	I0318 12:44:43.703916    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became pre-candidate at term 2"}
	I0318 12:44:43.704057    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgPreVoteResp from 78764271becab2d0 at term 2"}
	I0318 12:44:43.704057    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became candidate at term 3"}
	I0318 12:44:43.704057    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgVoteResp from 78764271becab2d0 at term 3"}
	I0318 12:44:43.704057    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became leader at term 3"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 78764271becab2d0 elected leader 78764271becab2d0 at term 3"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641347Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"78764271becab2d0","local-member-attributes":"{Name:multinode-642600 ClientURLs:[https://172.25.148.129:2379]}","request-path":"/0/members/78764271becab2d0/attributes","cluster-id":"31713adf8492fbc4","publish-timeout":"7s"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.64409Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.644373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.650212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.148.129:2379"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.651053Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0318 12:44:43.711156    5712 logs.go:123] Gathering logs for kube-controller-manager [14ae9398d33b] ...
	I0318 12:44:43.711212    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ae9398d33b"
	I0318 12:44:43.742072    5712 command_runner.go:130] ! I0318 12:43:27.406049       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:43.742072    5712 command_runner.go:130] ! I0318 12:43:29.733819       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:44:43.742881    5712 command_runner.go:130] ! I0318 12:43:29.734137       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.743071    5712 command_runner.go:130] ! I0318 12:43:29.737351       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:43.743071    5712 command_runner.go:130] ! I0318 12:43:29.737598       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:43.743071    5712 command_runner.go:130] ! I0318 12:43:29.739365       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:44:43.743071    5712 command_runner.go:130] ! I0318 12:43:29.740428       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:43.743071    5712 command_runner.go:130] ! I0318 12:43:32.581261       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 12:44:43.743071    5712 command_runner.go:130] ! I0318 12:43:32.597867       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 12:44:43.743163    5712 command_runner.go:130] ! I0318 12:43:32.602078       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 12:44:43.743163    5712 command_runner.go:130] ! I0318 12:43:32.602099       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 12:44:43.743163    5712 command_runner.go:130] ! I0318 12:43:32.605600       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 12:44:43.743263    5712 command_runner.go:130] ! I0318 12:43:32.605807       1 expand_controller.go:328] "Starting expand controller"
	I0318 12:44:43.744074    5712 command_runner.go:130] ! I0318 12:43:32.605957       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 12:44:43.744565    5712 command_runner.go:130] ! I0318 12:43:32.620725       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 12:44:43.744565    5712 command_runner.go:130] ! I0318 12:43:32.621286       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 12:44:43.744565    5712 command_runner.go:130] ! I0318 12:43:32.621374       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 12:44:43.744565    5712 command_runner.go:130] ! I0318 12:43:32.663010       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 12:44:43.744565    5712 command_runner.go:130] ! I0318 12:43:32.663383       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.663451       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.674431       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.675030       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.675045       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.680220       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.680236       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.680266       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.681919       1 shared_informer.go:318] Caches are synced for tokens
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.684132       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.684147       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.684164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.685811       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.685845       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.686123       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.687526       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.687845       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.687858       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.687918       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.691958       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.692673       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.696192       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.696622       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 12:44:43.745228    5712 command_runner.go:130] ! I0318 12:43:32.701031       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 12:44:43.745228    5712 command_runner.go:130] ! I0318 12:43:32.701415       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 12:44:43.745293    5712 command_runner.go:130] ! I0318 12:43:32.701449       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 12:44:43.745293    5712 command_runner.go:130] ! I0318 12:43:32.701458       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 12:44:43.745293    5712 command_runner.go:130] ! E0318 12:43:32.705162       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 12:44:43.745293    5712 command_runner.go:130] ! I0318 12:43:32.705349       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 12:44:43.745400    5712 command_runner.go:130] ! I0318 12:43:32.705364       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 12:44:43.745400    5712 command_runner.go:130] ! I0318 12:43:32.705376       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 12:44:43.745400    5712 command_runner.go:130] ! I0318 12:43:32.750736       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 12:44:43.745463    5712 command_runner.go:130] ! I0318 12:43:32.751361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 12:44:43.745486    5712 command_runner.go:130] ! W0318 12:43:32.751515       1 shared_informer.go:593] resyncPeriod 19h34m1.540802039s is smaller than resyncCheckPeriod 20h12m46.622656472s and the informer has already started. Changing it to 20h12m46.622656472s
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.752012       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.752529       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.752719       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.752884       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.753191       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.753284       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.753677       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.753791       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.753884       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.754036       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.754202       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.754691       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.755001       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.755205       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.755784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.755974       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.756144       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.756649       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.756826       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.757119       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.757364       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.757580       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! E0318 12:43:32.773718       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.773746       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.786590       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.786978       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.787007       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.795770       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.798452       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.798585       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.801712       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.802261       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 12:44:43.746057    5712 command_runner.go:130] ! I0318 12:43:32.806063       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 12:44:43.746057    5712 command_runner.go:130] ! I0318 12:43:32.823560       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 12:44:43.746057    5712 command_runner.go:130] ! I0318 12:43:32.823578       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 12:44:43.746121    5712 command_runner.go:130] ! I0318 12:43:32.823595       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:43.746121    5712 command_runner.go:130] ! I0318 12:43:32.823621       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 12:44:43.746121    5712 command_runner.go:130] ! I0318 12:43:32.833033       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 12:44:43.746121    5712 command_runner.go:130] ! I0318 12:43:32.833480       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 12:44:43.746121    5712 command_runner.go:130] ! I0318 12:43:32.833494       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 12:44:43.746121    5712 command_runner.go:130] ! I0318 12:43:32.862160       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 12:44:43.746212    5712 command_runner.go:130] ! I0318 12:43:32.862209       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 12:44:43.746212    5712 command_runner.go:130] ! I0318 12:43:32.862524       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 12:44:43.746212    5712 command_runner.go:130] ! I0318 12:43:32.862562       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 12:44:43.746284    5712 command_runner.go:130] ! I0318 12:43:32.862573       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 12:44:43.746303    5712 command_runner.go:130] ! I0318 12:43:32.883369       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 12:44:43.746303    5712 command_runner.go:130] ! I0318 12:43:32.886141       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 12:44:43.746303    5712 command_runner.go:130] ! I0318 12:43:32.886674       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 12:44:43.746303    5712 command_runner.go:130] ! I0318 12:43:32.896468       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.896951       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.897135       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.900325       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.900580       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.903531       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.917793       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.918152       1 horizontal.go:200] "Starting HPA controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.918638       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.920489       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.920802       1 gc_controller.go:101] "Starting GC controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.922940       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.923834       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 12:44:43.746529    5712 command_runner.go:130] ! I0318 12:43:32.924143       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 12:44:43.746529    5712 command_runner.go:130] ! I0318 12:43:32.924461       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 12:44:43.746529    5712 command_runner.go:130] ! I0318 12:43:32.935394       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 12:44:43.746529    5712 command_runner.go:130] ! I0318 12:43:32.935610       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 12:44:43.746529    5712 command_runner.go:130] ! I0318 12:43:32.935623       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 12:44:43.746603    5712 command_runner.go:130] ! I0318 12:43:32.996434       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 12:44:43.746653    5712 command_runner.go:130] ! I0318 12:43:32.996586       1 job_controller.go:226] "Starting job controller"
	I0318 12:44:43.746670    5712 command_runner.go:130] ! I0318 12:43:32.996666       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 12:44:43.746670    5712 command_runner.go:130] ! I0318 12:43:33.085354       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 12:44:43.746670    5712 command_runner.go:130] ! I0318 12:43:33.086157       1 disruption.go:433] "Sending events to api server."
	I0318 12:44:43.746670    5712 command_runner.go:130] ! I0318 12:43:33.086235       1 disruption.go:444] "Starting disruption controller"
	I0318 12:44:43.746727    5712 command_runner.go:130] ! I0318 12:43:33.086245       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.141477       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.142359       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.142566       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.186973       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.187335       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.187410       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.236517       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.236982       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 12:44:43.746866    5712 command_runner.go:130] ! I0318 12:43:33.237471       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 12:44:43.746866    5712 command_runner.go:130] ! I0318 12:43:33.286539       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 12:44:43.746866    5712 command_runner.go:130] ! I0318 12:43:33.287154       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 12:44:43.746866    5712 command_runner.go:130] ! I0318 12:43:33.287375       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 12:44:43.746933    5712 command_runner.go:130] ! I0318 12:43:43.355688       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 12:44:43.746957    5712 command_runner.go:130] ! I0318 12:43:43.355845       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.356879       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.357033       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.359716       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.361043       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.361062       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.364706       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.364861       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.364989       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.369492       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.369675       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.369706       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.375944       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.376145       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.377600       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.390058       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.405940       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600\" does not exist"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.408115       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.408433       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.408623       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m02\" does not exist"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.408708       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.408817       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.421506       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.446678       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.459596       1 shared_informer.go:318] Caches are synced for node
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.459833       1 range_allocator.go:174] "Sending events to api server"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.460258       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.460829       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 12:44:43.747521    5712 command_runner.go:130] ! I0318 12:43:43.461091       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 12:44:43.747521    5712 command_runner.go:130] ! I0318 12:43:43.461418       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 12:44:43.747521    5712 command_runner.go:130] ! I0318 12:43:43.463618       1 shared_informer.go:318] Caches are synced for namespace
	I0318 12:44:43.747566    5712 command_runner.go:130] ! I0318 12:43:43.466097       1 shared_informer.go:318] Caches are synced for taint
	I0318 12:44:43.747566    5712 command_runner.go:130] ! I0318 12:43:43.466427       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 12:44:43.747566    5712 command_runner.go:130] ! I0318 12:43:43.466639       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 12:44:43.747566    5712 command_runner.go:130] ! I0318 12:43:43.466863       1 taint_manager.go:210] "Sending events to api server"
	I0318 12:44:43.747566    5712 command_runner.go:130] ! I0318 12:43:43.468821       1 event.go:307] "Event occurred" object="multinode-642600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600 event: Registered Node multinode-642600 in Controller"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.469328       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.469579       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.469959       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.477268       1 shared_informer.go:318] Caches are synced for deployment
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.486297       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.487082       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.487171       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.487768       1 shared_informer.go:318] Caches are synced for TTL
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.487848       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.489265       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.497682       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.498610       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.498725       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.501123       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.503362       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.505991       1 shared_informer.go:318] Caches are synced for expand
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.503938       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.506104       1 shared_informer.go:318] Caches are synced for service account
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.505782       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m02"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.505818       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m03"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.506356       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.521010       1 shared_informer.go:318] Caches are synced for HPA
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.524230       1 shared_informer.go:318] Caches are synced for GC
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.527081       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.534422       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.537721       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.545260       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.546769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.454588ms"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.547853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.476888ms"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.552128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66µs"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.552429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.199µs"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.565701       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.580927       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.585098       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.586663       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.590461       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.597830       1 shared_informer.go:318] Caches are synced for job
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.635734       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.658493       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.686534       1 shared_informer.go:318] Caches are synced for disruption
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:44.024395       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:44.024760       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:44.048280       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:11.303411       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:13.533509       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-48qkw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-48qkw"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:13.534203       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-fgn7v" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-fgn7v"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:13.534478       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:23.562573       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m02 status is now: NodeNotReady"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:23.591486       1 event.go:307] "Event occurred" object="kube-system/kindnet-d5llj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:23.614671       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vts9f" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:23.639496       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-hmhdf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:23.661949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.740356ms"
	I0318 12:44:43.748571    5712 command_runner.go:130] ! I0318 12:44:23.663289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="50.499µs"
	I0318 12:44:43.748571    5712 command_runner.go:130] ! I0318 12:44:37.149797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.1µs"
	I0318 12:44:43.748571    5712 command_runner.go:130] ! I0318 12:44:37.209300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="28.125704ms"
	I0318 12:44:43.748571    5712 command_runner.go:130] ! I0318 12:44:37.209415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.4µs"
	I0318 12:44:43.748571    5712 command_runner.go:130] ! I0318 12:44:37.245284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.227968ms"
	I0318 12:44:43.748571    5712 command_runner.go:130] ! I0318 12:44:37.254358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="3.872028ms"
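The "Waiting for caches to sync" / "Caches are synced" pairs that dominate the capture above are client-go's shared-informer startup handshake: each controller registers its informers, starts the factory, and blocks until the local caches mirror the API server before its work loop begins. The following is a minimal illustrative sketch of that pattern (not part of the test output), assuming the standard k8s.io/client-go API; the kubeconfig path and the choice of a pod informer are assumptions for the example, not taken from this run:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a reachable kubeconfig at the default ~/.kube/config path.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        stopCh := make(chan struct{})
        defer close(stopCh)

        // A controller registers the informers it needs, then starts the factory.
        factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
        podInformer := factory.Core().V1().Pods().Informer()
        factory.Start(stopCh)

        // Block until the initial LIST/WATCH has populated the local cache.
        // kube-controller-manager's shared_informer.go emits the
        // "Waiting for caches to sync" / "Caches are synced" lines seen in
        // the log above while this same handshake completes internally.
        if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced) {
            panic("timed out waiting for caches to sync")
        }
        fmt.Println("caches are synced; controller work loop can start")
    }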
	I0318 12:44:43.762553    5712 logs.go:123] Gathering logs for kube-controller-manager [a54be4436901] ...
	I0318 12:44:43.762553    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54be4436901"
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:43.818653       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:45.050029       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:45.050365       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:45.053707       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:45.056733       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:45.057073       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:45.057232       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:49.569825       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:49.602388       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:49.603663       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.603680       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.621364       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.621624       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.621432       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.622281       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.644362       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.644758       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.646607       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.660400       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.661053       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.670023       1 shared_informer.go:318] Caches are synced for tokens
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.679784       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.680015       1 expand_controller.go:328] "Starting expand controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.680028       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.692925       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.693164       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.693449       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.727464       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.727835       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.727848       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.742409       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.743029       1 disruption.go:433] "Sending events to api server."
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.743301       1 disruption.go:444] "Starting disruption controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.743449       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.759716       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.760338       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.760376       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.829809       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.830343       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.830415       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.085725       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.086016       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.086167       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.234974       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.242121       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.242138       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.384031       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.384090       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.384100       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.384108       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.530182       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.530258       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.530267       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.695232       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.695351       1 job_controller.go:226] "Starting job controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.695361       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.833418       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.833674       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.833686       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.998838       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.999193       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.999227       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.141445       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.141508       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.141518       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.279642       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.279728       1 gc_controller.go:101] "Starting GC controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.279742       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.429394       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.429600       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.429612       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:19:01.598915       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:19:01.598966       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:19:01.599163       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.599174       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.601488       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.601803       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.601987       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.602013       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.602019       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.623744       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.624435       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.624966       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.663430       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.663839       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.663858       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710104       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710384       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710455       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710487       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710760       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710795       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710822       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710886       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710930       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710986       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711095       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711137       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711160       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711179       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711211       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711237       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711261       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711316       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711339       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711356       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711486       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711654       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711784       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.715155       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.715586       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.715886       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.732340       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.732695       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.732944       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.747011       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.747361       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.747484       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 12:44:43.794564    5712 command_runner.go:130] ! E0318 12:19:01.771424       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.771527       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.771544       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.772072       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! E0318 12:19:01.775461       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.775656       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.788795       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.789335       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.789368       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.809091       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.809368       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.809720       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.846190       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.846779       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.846879       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.137994       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.138059       1 horizontal.go:200] "Starting HPA controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.138069       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.189502       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.189864       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.190041       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.191172       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.191256       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.191347       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.193057       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.193152       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.193246       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.194807       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.194851       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.195648       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.194886       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.345061       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.347311       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.364524       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.380069       1 shared_informer.go:318] Caches are synced for expand
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.390503       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.391317       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.393201       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.402532       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.419971       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.421082       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600\" does not exist"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.427201       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.427876       1 shared_informer.go:318] Caches are synced for service account
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.429003       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.429629       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.430311       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.432115       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.434603       1 shared_informer.go:318] Caches are synced for TTL
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.437362       1 shared_informer.go:318] Caches are synced for deployment
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.438306       1 shared_informer.go:318] Caches are synced for HPA
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.441785       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.442916       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.444302       1 shared_informer.go:318] Caches are synced for disruption
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.447137       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.447694       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.452098       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.454023       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.461158       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.464623       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.480847       1 shared_informer.go:318] Caches are synced for GC
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.487772       1 shared_informer.go:318] Caches are synced for namespace
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.490082       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.494160       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.499312       1 shared_informer.go:318] Caches are synced for node
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.499587       1 range_allocator.go:174] "Sending events to api server"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.499772       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.500365       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.500954       1 shared_informer.go:318] Caches are synced for job
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.501438       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.501724       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.503931       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.509883       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.528934       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600" podCIDRs=["10.244.0.0/24"]
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.565942       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.603468       1 shared_informer.go:318] Caches are synced for taint
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.603627       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.603721       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600"
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.603760       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.603782       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.603821       1 taint_manager.go:210] "Sending events to api server"
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.605481       1 event.go:307] "Event occurred" object="multinode-642600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600 event: Registered Node multinode-642600 in Controller"
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.613688       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.644197       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:02.675188       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:02.675510       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:02.681286       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.023915       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.023946       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.029139       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.075135       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.175071       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kpt4f"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.181384       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4dg79"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.624405       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fgn7v"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.691902       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xkgdt"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.810454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="734.97569ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.847906       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="37.087083ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.945758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.729709ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.945958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.501µs"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:04.640409       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:04.732241       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-xkgdt"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:04.763359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.567183ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:04.828298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.870031ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:04.890459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.083804ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:04.890764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.4µs"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:15.938090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="157.9µs"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:15.982953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.301µs"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:17.607464       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:19.208242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.7µs"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:19.274086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.124146ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:19.275145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="211.9µs"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:22:12.652722       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m02\" does not exist"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:22:12.679760       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m02" podCIDRs=["10.244.1.0/24"]
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:22:12.706735       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d5llj"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:22:12.706774       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vts9f"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:22:17.642129       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m02"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:22:17.642212       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:22:34.263318       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:01.851486       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:01.881281       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-hmhdf"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:01.924301       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-48qkw"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:01.946058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.676064ms"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:02.049702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="103.251772ms"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:02.049789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="35.4µs"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:04.783277       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.030749ms"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:04.783520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.9µs"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:05.441638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.350047ms"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:05.441876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="105µs"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:09.073772       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:09.075345       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:09.095707       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.2.0/24"]
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:09.110695       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-khbjt"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:09.110730       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-thkjp"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:12.715112       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m03"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:12.715611       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:30.856729       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:35:52.853028       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:35:52.854041       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:35:52.871920       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:35:52.891158       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:38:40.101072       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:38:42.930337       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-642600-m03 event: Removing Node multinode-642600-m03 from Controller"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:38:46.825246       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:38:46.827225       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:38:46.865011       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.3.0/24"]
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:38:47.931681       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:38:52.975724       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:40:33.280094       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:40:33.281180       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:40:33.601041       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:40:33.698293       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.817560    5712 logs.go:123] Gathering logs for coredns [fcf17db92b35] ...
	I0318 12:44:43.817560    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf17db92b35"
	I0318 12:44:43.846573    5712 command_runner.go:130] > .:53
	I0318 12:44:43.847044    5712 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	I0318 12:44:43.847110    5712 command_runner.go:130] > CoreDNS-1.10.1
	I0318 12:44:43.847110    5712 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 12:44:43.847110    5712 command_runner.go:130] > [INFO] 127.0.0.1:53681 - 55845 "HINFO IN 162544917519141994.8165783507281513505. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.028223444s
	I0318 12:44:43.849133    5712 logs.go:123] Gathering logs for kindnet [5cf42651cb21] ...
	I0318 12:44:43.849212    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf42651cb21"
	I0318 12:44:43.882471    5712 command_runner.go:130] ! I0318 12:29:43.278241       1 main.go:227] handling current node
	I0318 12:44:43.883103    5712 command_runner.go:130] ! I0318 12:29:43.278258       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883103    5712 command_runner.go:130] ! I0318 12:29:43.278267       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883217    5712 command_runner.go:130] ! I0318 12:29:43.279034       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883217    5712 command_runner.go:130] ! I0318 12:29:43.279112       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883217    5712 command_runner.go:130] ! I0318 12:29:53.290788       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883217    5712 command_runner.go:130] ! I0318 12:29:53.290919       1 main.go:227] handling current node
	I0318 12:44:43.883217    5712 command_runner.go:130] ! I0318 12:29:53.290935       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883299    5712 command_runner.go:130] ! I0318 12:29:53.290944       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883299    5712 command_runner.go:130] ! I0318 12:29:53.291443       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883299    5712 command_runner.go:130] ! I0318 12:29:53.291608       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883299    5712 command_runner.go:130] ! I0318 12:30:03.307097       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883299    5712 command_runner.go:130] ! I0318 12:30:03.307405       1 main.go:227] handling current node
	I0318 12:44:43.883387    5712 command_runner.go:130] ! I0318 12:30:03.307624       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883387    5712 command_runner.go:130] ! I0318 12:30:03.307713       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883387    5712 command_runner.go:130] ! I0318 12:30:03.307989       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883387    5712 command_runner.go:130] ! I0318 12:30:03.308095       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883387    5712 command_runner.go:130] ! I0318 12:30:13.315412       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883493    5712 command_runner.go:130] ! I0318 12:30:13.315512       1 main.go:227] handling current node
	I0318 12:44:43.883512    5712 command_runner.go:130] ! I0318 12:30:13.315528       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883512    5712 command_runner.go:130] ! I0318 12:30:13.315537       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883512    5712 command_runner.go:130] ! I0318 12:30:13.316187       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883512    5712 command_runner.go:130] ! I0318 12:30:13.316277       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883565    5712 command_runner.go:130] ! I0318 12:30:23.331223       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883565    5712 command_runner.go:130] ! I0318 12:30:23.331328       1 main.go:227] handling current node
	I0318 12:44:43.883565    5712 command_runner.go:130] ! I0318 12:30:23.331344       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883565    5712 command_runner.go:130] ! I0318 12:30:23.331352       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883643    5712 command_runner.go:130] ! I0318 12:30:23.331895       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883643    5712 command_runner.go:130] ! I0318 12:30:23.332071       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883643    5712 command_runner.go:130] ! I0318 12:30:33.338821       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883703    5712 command_runner.go:130] ! I0318 12:30:33.338848       1 main.go:227] handling current node
	I0318 12:44:43.883725    5712 command_runner.go:130] ! I0318 12:30:33.338860       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883725    5712 command_runner.go:130] ! I0318 12:30:33.338866       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:33.339004       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:33.339017       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:43.354041       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:43.354126       1 main.go:227] handling current node
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:43.354142       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:43.354153       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:43.354280       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:43.354293       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:53.362056       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:53.362198       1 main.go:227] handling current node
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:53.362230       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:53.362239       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:53.362887       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:53.363194       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:03.378995       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:03.379039       1 main.go:227] handling current node
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:03.379096       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:03.379108       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:03.379432       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:03.379450       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:13.392082       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:13.392188       1 main.go:227] handling current node
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:13.392224       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:13.392249       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:13.392820       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:13.392974       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:23.402269       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:23.402391       1 main.go:227] handling current node
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:23.402408       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:23.402417       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:23.403188       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:23.403223       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:33.413396       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884368    5712 command_runner.go:130] ! I0318 12:31:33.413577       1 main.go:227] handling current node
	I0318 12:44:43.884415    5712 command_runner.go:130] ! I0318 12:31:33.413639       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884415    5712 command_runner.go:130] ! I0318 12:31:33.413654       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884639    5712 command_runner.go:130] ! I0318 12:31:33.414293       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884700    5712 command_runner.go:130] ! I0318 12:31:33.414437       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884700    5712 command_runner.go:130] ! I0318 12:31:43.424274       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884700    5712 command_runner.go:130] ! I0318 12:31:43.424320       1 main.go:227] handling current node
	I0318 12:44:43.884700    5712 command_runner.go:130] ! I0318 12:31:43.424332       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884700    5712 command_runner.go:130] ! I0318 12:31:43.424339       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884767    5712 command_runner.go:130] ! I0318 12:31:43.424591       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:43.424608       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:53.433473       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:53.433591       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:53.433607       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:53.433615       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:53.433851       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:53.433959       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:03.443363       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:03.443411       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:03.443424       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:03.443450       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:03.444602       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:03.445390       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:13.460166       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:13.460215       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:13.460229       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:13.460237       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:13.460679       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:13.460697       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:23.479958       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:23.480007       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:23.480024       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:23.480032       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:23.480521       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:23.480578       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:33.491143       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:33.491190       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:33.491204       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:33.491211       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:33.491340       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:33.491369       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:43.505355       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:43.505474       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:43.505490       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:43.505498       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:43.505666       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:43.505696       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:53.513310       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:53.513340       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:53.513350       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:53.513357       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:53.513783       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:32:53.513865       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:03.527897       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:03.528343       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:03.528485       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:03.528785       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:03.529110       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:03.529205       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:13.538048       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:13.538183       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:13.538222       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:13.538317       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:13.538750       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:13.538888       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:23.555771       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:23.555820       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:23.555895       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:23.555905       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:23.556511       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:23.556780       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:33.566023       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:33.566190       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:33.566208       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:33.566217       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:33.566931       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:33.567031       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:43.581332       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:43.581382       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:43.581449       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:43.581482       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:43.582063       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:43.582166       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:53.588426       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:53.588602       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:53.588619       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:53.588628       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:53.588919       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:53.588937       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:03.604902       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:03.605007       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:03.605023       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:03.605032       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:03.605612       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:03.605696       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:13.618369       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:13.618488       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:13.618585       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:13.618604       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:13.618738       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:13.618747       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:23.626772       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:23.626887       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:23.626903       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:23.626911       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:23.627415       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:23.627448       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:33.644122       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:33.644215       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:33.644233       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:33.644757       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:33.645128       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:33.645240       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:43.661684       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:43.661731       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:43.661744       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:43.661751       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:43.662532       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:43.662645       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:53.676649       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:53.677242       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:53.677518       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:53.677631       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:53.677873       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:53.677905       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:03.685328       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:03.685457       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:03.685474       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:03.685483       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:03.685861       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:03.686001       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:13.702673       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:13.702782       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:13.702801       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:13.703456       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:13.703827       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:13.703864       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:23.711167       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:23.711370       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:23.711388       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:23.711398       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:23.712127       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:23.712222       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:33.724041       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:33.724810       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:33.724973       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:33.725045       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:33.725458       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:33.725875       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:43.740216       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:43.740493       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:43.740511       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:43.740520       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:43.741453       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:43.741584       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:53.748632       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:53.749163       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:53.749285       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:53.749498       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:53.749815       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:53.749904       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:03.765208       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:03.765326       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:03.765343       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:03.765351       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:03.765883       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:03.766028       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:13.775221       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:13.775396       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:13.775430       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:13.775502       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:13.776058       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:13.776177       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:23.790073       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:23.790179       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:23.790195       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:23.790207       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:23.790761       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:23.790798       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:33.800116       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:33.800240       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:33.800256       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:33.800265       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:33.800837       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:33.800858       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:43.817961       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:43.818115       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:43.818132       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:43.818146       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:43.818537       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:43.818661       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:53.827340       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:53.827385       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:53.827398       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:53.827406       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:53.827787       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:53.827885       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:03.840761       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:03.840837       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:03.840851       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:03.840859       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:03.841285       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:03.841319       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:13.848127       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:13.848174       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:13.848188       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:13.848195       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:13.848630       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:13.848646       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:23.863745       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:23.863916       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:23.863950       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:23.863996       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:23.864419       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:23.864510       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:33.876214       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:33.876331       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:33.876347       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:33.876355       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:33.877021       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:33.877100       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:43.886399       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:43.886544       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:43.886626       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:43.886636       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:43.886872       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:43.886890       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:53.903761       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:53.903845       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:53.903871       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:53.903880       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:53.905033       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:53.905181       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:03.919532       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:03.919783       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:03.919840       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:03.919894       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:03.920221       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:03.920390       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:13.927894       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:13.928004       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:13.928022       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:13.928031       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:13.928232       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:13.928269       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:23.943692       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:23.943780       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:23.943795       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:23.943804       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:23.944523       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:23.944596       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:33.952000       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:33.952098       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:33.952114       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:33.952123       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:33.952466       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:33.952503       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:43.965979       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:43.966101       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:43.966117       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:43.966125       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.989210       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.989308       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.989322       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.989373       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.989864       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.989957       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.990028       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.157.200 Flags: [] Table: 0} 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:39:03.996429       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:39:03.996598       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:39:03.996614       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:03.996623       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:03.996739       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:03.996753       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:14.008318       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:14.008384       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:14.008398       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:14.008405       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:14.009080       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:14.009179       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:24.016154       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:24.016315       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:24.016330       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:24.016338       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:24.016842       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:24.016875       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:34.029061       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:34.029159       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:34.029175       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:34.029184       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:34.030103       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:34.030216       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:44.037921       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:44.037960       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:44.037972       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:44.037981       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:44.038243       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:44.038318       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:54.057786       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:54.058021       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:54.058100       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:54.058189       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:54.058376       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:54.058478       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:04.067119       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:04.067262       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:04.067280       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:04.067289       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:04.067742       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:04.067846       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:14.082426       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:14.082921       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:14.082946       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:14.082956       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:14.083174       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:14.083247       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:24.098060       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:24.098161       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:24.098178       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:24.098187       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:24.098316       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:24.098324       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:34.335103       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:34.335169       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:34.335185       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:34.335192       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:34.335470       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:34.335488       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:44.342962       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:44.343122       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:44.343139       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:44.343148       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:44.343738       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:44.343780       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.908429    5712 logs.go:123] Gathering logs for container status ...
	I0318 12:44:43.908429    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 12:44:44.016071    5712 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0318 12:44:44.016071    5712 command_runner.go:130] > 566e40ce923f7       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   e1b2432b0ed66       busybox-5b5d89c9d6-48qkw
	I0318 12:44:44.016071    5712 command_runner.go:130] > fcf17db92b351       ead0a4a53df89                                                                                         9 seconds ago        Running             coredns                   1                   1090dd5740980       coredns-5dd5756b68-fgn7v
	I0318 12:44:44.016071    5712 command_runner.go:130] > 4652c26c0904e       6e38f40d628db                                                                                         27 seconds ago       Running             storage-provisioner       2                   889c16eb0ab73       storage-provisioner
	I0318 12:44:44.016071    5712 command_runner.go:130] > 9fec05a61d2a9       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   5ecbdcbdad3fa       kindnet-kpt4f
	I0318 12:44:44.016071    5712 command_runner.go:130] > 787ade2ea2cd0       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   889c16eb0ab73       storage-provisioner
	I0318 12:44:44.016071    5712 command_runner.go:130] > 575b41a3a85a4       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   7a2f0ccaf5c4c       kube-proxy-4dg79
	I0318 12:44:44.016071    5712 command_runner.go:130] > a48a6d310b868       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   a7281d6e698ea       kube-apiserver-multinode-642600
	I0318 12:44:44.016071    5712 command_runner.go:130] > 14ae9398d33b1       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   eca6768355c74       kube-controller-manager-multinode-642600
	I0318 12:44:44.016071    5712 command_runner.go:130] > bd1e4f4d262e3       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   f62197122538f       kube-scheduler-multinode-642600
	I0318 12:44:44.016071    5712 command_runner.go:130] > 8e7911b58c587       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   67004ee038ee4       etcd-multinode-642600
	I0318 12:44:44.016071    5712 command_runner.go:130] > a8dd2eacb7251       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago       Exited              busybox                   0                   29bb4d534c2e2       busybox-5b5d89c9d6-48qkw
	I0318 12:44:44.016071    5712 command_runner.go:130] > e81f1d2fdb360       ead0a4a53df89                                                                                         25 minutes ago       Exited              coredns                   0                   ed38da653fbef       coredns-5dd5756b68-fgn7v
	I0318 12:44:44.016071    5712 command_runner.go:130] > 5cf42651cb21d       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago       Exited              kindnet-cni               0                   fef37141be6db       kindnet-kpt4f
	I0318 12:44:44.016071    5712 command_runner.go:130] > 4bbad08fe59ac       83f6cc407eed8                                                                                         25 minutes ago       Exited              kube-proxy                0                   2f4709a3a45a4       kube-proxy-4dg79
	I0318 12:44:44.016071    5712 command_runner.go:130] > a54be44369019       d058aa5ab969c                                                                                         26 minutes ago       Exited              kube-controller-manager   0                   d766c4514f0bf       kube-controller-manager-multinode-642600
	I0318 12:44:44.016607    5712 command_runner.go:130] > 47777d4c0b90d       e3db313c6dbc0                                                                                         26 minutes ago       Exited              kube-scheduler            0                   3500a9f1ca84e       kube-scheduler-multinode-642600
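The container-status step above shells out with a fallback, `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`: prefer crictl when present, otherwise fall back to docker. A minimal Go sketch of that same fallback (the helper name listContainers is hypothetical, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // listContainers mirrors the shell fallback used in the log above:
    // use crictl when it is on PATH, otherwise fall back to docker.
    func listContainers() ([]byte, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := listContainers()
        if err != nil {
            fmt.Println("listing containers failed:", err)
            return
        }
        fmt.Print(string(out))
    }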
	I0318 12:44:44.018691    5712 logs.go:123] Gathering logs for coredns [e81f1d2fdb36] ...
	I0318 12:44:44.018691    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81f1d2fdb36"
	I0318 12:44:44.054690    5712 command_runner.go:130] > .:53
	I0318 12:44:44.055567    5712 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	I0318 12:44:44.055567    5712 command_runner.go:130] > CoreDNS-1.10.1
	I0318 12:44:44.055567    5712 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 12:44:44.055697    5712 command_runner.go:130] > [INFO] 127.0.0.1:48183 - 41539 "HINFO IN 767578685007701398.8900982300391989616. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.040167772s
	I0318 12:44:44.055761    5712 command_runner.go:130] > [INFO] 10.244.0.3:56190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320901s
	I0318 12:44:44.055820    5712 command_runner.go:130] > [INFO] 10.244.0.3:43050 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.04023503s
	I0318 12:44:44.055820    5712 command_runner.go:130] > [INFO] 10.244.0.3:47302 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.158419612s
	I0318 12:44:44.055820    5712 command_runner.go:130] > [INFO] 10.244.0.3:37199 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.162590352s
	I0318 12:44:44.055820    5712 command_runner.go:130] > [INFO] 10.244.1.2:48003 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216101s
	I0318 12:44:44.055868    5712 command_runner.go:130] > [INFO] 10.244.1.2:48857 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000380201s
	I0318 12:44:44.055868    5712 command_runner.go:130] > [INFO] 10.244.1.2:52412 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000070401s
	I0318 12:44:44.055909    5712 command_runner.go:130] > [INFO] 10.244.1.2:59362 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000071801s
	I0318 12:44:44.055909    5712 command_runner.go:130] > [INFO] 10.244.0.3:38833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000250501s
	I0318 12:44:44.055961    5712 command_runner.go:130] > [INFO] 10.244.0.3:34860 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.064163607s
	I0318 12:44:44.056142    5712 command_runner.go:130] > [INFO] 10.244.0.3:45210 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227601s
	I0318 12:44:44.056189    5712 command_runner.go:130] > [INFO] 10.244.0.3:32804 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001229s
	I0318 12:44:44.056189    5712 command_runner.go:130] > [INFO] 10.244.0.3:44904 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01563145s
	I0318 12:44:44.056189    5712 command_runner.go:130] > [INFO] 10.244.0.3:34958 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002035s
	I0318 12:44:44.056189    5712 command_runner.go:130] > [INFO] 10.244.0.3:59094 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001507s
	I0318 12:44:44.056256    5712 command_runner.go:130] > [INFO] 10.244.0.3:39370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181001s
	I0318 12:44:44.056256    5712 command_runner.go:130] > [INFO] 10.244.1.2:40318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000302101s
	I0318 12:44:44.056300    5712 command_runner.go:130] > [INFO] 10.244.1.2:43523 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001489s
	I0318 12:44:44.056300    5712 command_runner.go:130] > [INFO] 10.244.1.2:47882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001346s
	I0318 12:44:44.056300    5712 command_runner.go:130] > [INFO] 10.244.1.2:38222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057401s
	I0318 12:44:44.056300    5712 command_runner.go:130] > [INFO] 10.244.1.2:49068 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001253s
	I0318 12:44:44.056300    5712 command_runner.go:130] > [INFO] 10.244.1.2:35375 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000582s
	I0318 12:44:44.056390    5712 command_runner.go:130] > [INFO] 10.244.1.2:40933 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179201s
	I0318 12:44:44.056390    5712 command_runner.go:130] > [INFO] 10.244.1.2:36014 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002051s
	I0318 12:44:44.056390    5712 command_runner.go:130] > [INFO] 10.244.0.3:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265401s
	I0318 12:44:44.056390    5712 command_runner.go:130] > [INFO] 10.244.0.3:52912 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148001s
	I0318 12:44:44.056390    5712 command_runner.go:130] > [INFO] 10.244.0.3:33147 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143701s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.0.3:49893 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000536s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:42681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001221s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:41416 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:58254 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000241501s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:35844 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197201s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.0.3:33559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102201s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.0.3:53963 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158701s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.0.3:41406 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001297s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.0.3:34685 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000264001s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:43312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001178s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:55281 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235501s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:34710 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000874s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:57686 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000557s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
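The CoreDNS queries above (A/AAAA/PTR lookups arriving from pod IPs in 10.244.0.0/16) can be reproduced from inside a pod by pinning a resolver to the cluster DNS service; the 10.96.0.10:53 endpoint below is an assumption inferred from the "10.0.96.10.in-addr.arpa" PTR records in the log. A minimal sketch:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                // Send every query to the cluster DNS service instead of
                // whatever /etc/resolv.conf points at.
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
        fmt.Println(addrs, err)
    }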
	I0318 12:44:44.059249    5712 logs.go:123] Gathering logs for kube-scheduler [bd1e4f4d262e] ...
	I0318 12:44:44.059249    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1e4f4d262e"
	I0318 12:44:44.096040    5712 command_runner.go:130] ! I0318 12:43:27.649061       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:44.097030    5712 command_runner.go:130] ! W0318 12:43:30.548831       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 12:44:44.097030    5712 command_runner.go:130] ! W0318 12:43:30.549092       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:44.097030    5712 command_runner.go:130] ! W0318 12:43:30.549282       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 12:44:44.097030    5712 command_runner.go:130] ! W0318 12:43:30.549461       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.613305       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.613417       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.618512       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.619171       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.619276       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.620071       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.720411       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:44.099046    5712 logs.go:123] Gathering logs for kube-proxy [575b41a3a85a] ...
	I0318 12:44:44.099046    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 575b41a3a85a"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.336778       1 server_others.go:69] "Using iptables proxy"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.550433       1 node.go:141] Successfully retrieved node IP: 172.25.148.129
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.793084       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.793109       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.796954       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.798936       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.800347       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.800569       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.803648       1 config.go:188] "Starting service config controller"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.805156       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.805421       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.805584       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.808628       1 config.go:315] "Starting node config controller"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.808736       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.905580       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.907041       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.909416       1 shared_informer.go:318] Caches are synced for node config
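The kube-proxy lines "Waiting for caches to sync" / "Caches are synced" are client-go's shared-informer startup handshake: watches are started, and work is deferred until the local caches reflect the apiserver's state. A minimal sketch of that pattern, assuming an in-cluster config (this is not kube-proxy's actual source):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        stop := make(chan struct{})
        defer close(stop)

        factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
        svcInformer := factory.Core().V1().Services().Informer()

        factory.Start(stop) // begin watching the apiserver
        // Block until the informer cache is populated, as in the log above.
        if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
            panic("timed out waiting for caches to sync")
        }
        fmt.Println("caches are synced; safe to start the sync loops")
    }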
	I0318 12:44:44.131031    5712 logs.go:123] Gathering logs for kindnet [9fec05a61d2a] ...
	I0318 12:44:44.131031    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fec05a61d2a"
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:43:33.429181       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:43:33.431032       1 main.go:107] hostIP = 172.25.148.129
	I0318 12:44:44.166540    5712 command_runner.go:130] ! podIP = 172.25.148.129
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:43:33.432708       1 main.go:116] setting mtu 1500 for CNI 
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:43:33.432750       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:43:33.432773       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.855331       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.906638       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.906763       1 main.go:227] handling current node
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.907280       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.907371       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.907763       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.25.159.102 Flags: [] Table: 0} 
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.907983       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.907999       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.908063       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.157.200 Flags: [] Table: 0} 
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:13.926166       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:13.926260       1 main.go:227] handling current node
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:13.926281       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:13.926377       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:13.927231       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:44.167228    5712 command_runner.go:130] ! I0318 12:44:13.927364       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:44.167228    5712 command_runner.go:130] ! I0318 12:44:23.943396       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:44.167228    5712 command_runner.go:130] ! I0318 12:44:23.943437       1 main.go:227] handling current node
	I0318 12:44:44.167228    5712 command_runner.go:130] ! I0318 12:44:23.943450       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:44.167346    5712 command_runner.go:130] ! I0318 12:44:23.943456       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:44.167346    5712 command_runner.go:130] ! I0318 12:44:23.943816       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:44.167346    5712 command_runner.go:130] ! I0318 12:44:23.943956       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:44.167346    5712 command_runner.go:130] ! I0318 12:44:33.951114       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:44.167414    5712 command_runner.go:130] ! I0318 12:44:33.951215       1 main.go:227] handling current node
	I0318 12:44:44.167442    5712 command_runner.go:130] ! I0318 12:44:33.951232       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:33.951241       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:33.951807       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:33.951927       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:43.968530       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:43.968658       1 main.go:227] handling current node
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:43.968737       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:43.968990       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:43.969485       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:43.969715       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
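The kindnet "Adding route {... Dst: 10.244.3.0/24 ... Gw: 172.25.157.200 ...}" lines show it installing a host route to each remote node's pod CIDR via that node's IP. A minimal sketch of the same idea using github.com/vishvananda/netlink (Linux-only, requires CAP_NET_ADMIN; not kindnet's actual source), with the CIDR and gateway taken from the log above:

    package main

    import (
        "net"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Route the remote node's pod CIDR via that node's IP.
        _, podCIDR, err := net.ParseCIDR("10.244.3.0/24")
        if err != nil {
            panic(err)
        }
        route := &netlink.Route{
            Dst: podCIDR,
            Gw:  net.ParseIP("172.25.157.200"),
        }
        if err := netlink.RouteAdd(route); err != nil {
            panic(err) // fails without CAP_NET_ADMIN or if the route exists
        }
    }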
	I0318 12:44:46.680472    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:44:46.689900    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 200:
	ok
	I0318 12:44:46.690852    5712 round_trippers.go:463] GET https://172.25.148.129:8443/version
	I0318 12:44:46.690868    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:46.690907    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:46.690907    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:46.692243    5712 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 12:44:46.692659    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:46.692659    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:46.692659    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:46.692659    5712 round_trippers.go:580]     Content-Length: 264
	I0318 12:44:46.692659    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:46 GMT
	I0318 12:44:46.692659    5712 round_trippers.go:580]     Audit-Id: d3ce1c29-ea17-462e-848b-e39441cce8c7
	I0318 12:44:46.692659    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:46.692659    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:46.692659    5712 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 12:44:46.692813    5712 api_server.go:141] control plane version: v1.28.4
	I0318 12:44:46.692813    5712 api_server.go:131] duration metric: took 3.8780055s to wait for apiserver health ...
	I0318 12:44:46.692813    5712 system_pods.go:43] waiting for kube-system pods to appear ...
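The health wait above is a plain HTTPS round-trip: GET /healthz until it answers 200, then GET /version to read the control-plane version. A minimal sketch (the real client authenticates against the cluster CA; InsecureSkipVerify below is purely illustrative, and the address is the one from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // Illustration only: skip CA verification for the self-signed apiserver cert.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://172.25.148.129:8443" + path)
            if err != nil {
                fmt.Println(path, "error:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s -> %s\n%s\n", path, resp.Status, body)
        }
    }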
	I0318 12:44:46.704826    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 12:44:46.740735    5712 command_runner.go:130] > a48a6d310b86
	I0318 12:44:46.740800    5712 logs.go:276] 1 containers: [a48a6d310b86]
	I0318 12:44:46.751792    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 12:44:46.779857    5712 command_runner.go:130] > 8e7911b58c58
	I0318 12:44:46.780083    5712 logs.go:276] 1 containers: [8e7911b58c58]
	I0318 12:44:46.791203    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 12:44:46.818283    5712 command_runner.go:130] > fcf17db92b35
	I0318 12:44:46.819045    5712 command_runner.go:130] > e81f1d2fdb36
	I0318 12:44:46.820240    5712 logs.go:276] 2 containers: [fcf17db92b35 e81f1d2fdb36]
	I0318 12:44:46.830151    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 12:44:46.873443    5712 command_runner.go:130] > bd1e4f4d262e
	I0318 12:44:46.874114    5712 command_runner.go:130] > 47777d4c0b90
	I0318 12:44:46.874396    5712 logs.go:276] 2 containers: [bd1e4f4d262e 47777d4c0b90]
	I0318 12:44:46.885002    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 12:44:46.915972    5712 command_runner.go:130] > 575b41a3a85a
	I0318 12:44:46.916070    5712 command_runner.go:130] > 4bbad08fe59a
	I0318 12:44:46.916070    5712 logs.go:276] 2 containers: [575b41a3a85a 4bbad08fe59a]
	I0318 12:44:46.926910    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 12:44:46.959730    5712 command_runner.go:130] > 14ae9398d33b
	I0318 12:44:46.959730    5712 command_runner.go:130] > a54be4436901
	I0318 12:44:46.959839    5712 logs.go:276] 2 containers: [14ae9398d33b a54be4436901]
	I0318 12:44:46.970281    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 12:44:46.999590    5712 command_runner.go:130] > 9fec05a61d2a
	I0318 12:44:47.000570    5712 command_runner.go:130] > 5cf42651cb21
	I0318 12:44:47.000756    5712 logs.go:276] 2 containers: [9fec05a61d2a 5cf42651cb21]
	I0318 12:44:47.000809    5712 logs.go:123] Gathering logs for kindnet [5cf42651cb21] ...
	I0318 12:44:47.000870    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf42651cb21"
	I0318 12:44:47.033864    5712 command_runner.go:130] ! I0318 12:29:43.278241       1 main.go:227] handling current node
	I0318 12:44:47.033864    5712 command_runner.go:130] ! I0318 12:29:43.278258       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.033864    5712 command_runner.go:130] ! I0318 12:29:43.278267       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.033864    5712 command_runner.go:130] ! I0318 12:29:43.279034       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.033864    5712 command_runner.go:130] ! I0318 12:29:43.279112       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.034331    5712 command_runner.go:130] ! I0318 12:29:53.290788       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.034500    5712 command_runner.go:130] ! I0318 12:29:53.290919       1 main.go:227] handling current node
	I0318 12:44:47.034578    5712 command_runner.go:130] ! I0318 12:29:53.290935       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.034719    5712 command_runner.go:130] ! I0318 12:29:53.290944       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.035257    5712 command_runner.go:130] ! I0318 12:29:53.291443       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:29:53.291608       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:03.307097       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:03.307405       1 main.go:227] handling current node
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:03.307624       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:03.307713       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:03.307989       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:03.308095       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:13.315412       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:13.315512       1 main.go:227] handling current node
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:13.315528       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:13.315537       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:13.316187       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:13.316277       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:23.331223       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:23.331328       1 main.go:227] handling current node
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:23.331344       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:23.331352       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:23.331895       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:23.332071       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:33.338821       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:33.338848       1 main.go:227] handling current node
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:33.338860       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:33.338866       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:33.339004       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:33.339017       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:43.354041       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:43.354126       1 main.go:227] handling current node
	I0318 12:44:47.043545    5712 command_runner.go:130] ! I0318 12:30:43.354142       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043545    5712 command_runner.go:130] ! I0318 12:30:43.354153       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043545    5712 command_runner.go:130] ! I0318 12:30:43.354280       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043545    5712 command_runner.go:130] ! I0318 12:30:43.354293       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043545    5712 command_runner.go:130] ! I0318 12:30:53.362056       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043545    5712 command_runner.go:130] ! I0318 12:30:53.362198       1 main.go:227] handling current node
	I0318 12:44:47.043616    5712 command_runner.go:130] ! I0318 12:30:53.362230       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043616    5712 command_runner.go:130] ! I0318 12:30:53.362239       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043616    5712 command_runner.go:130] ! I0318 12:30:53.362887       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043616    5712 command_runner.go:130] ! I0318 12:30:53.363194       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043616    5712 command_runner.go:130] ! I0318 12:31:03.378995       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043692    5712 command_runner.go:130] ! I0318 12:31:03.379039       1 main.go:227] handling current node
	I0318 12:44:47.043692    5712 command_runner.go:130] ! I0318 12:31:03.379096       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043752    5712 command_runner.go:130] ! I0318 12:31:03.379108       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043826    5712 command_runner.go:130] ! I0318 12:31:03.379432       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:03.379450       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:13.392082       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:13.392188       1 main.go:227] handling current node
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:13.392224       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:13.392249       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:13.392820       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:13.392974       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:23.402269       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:23.402391       1 main.go:227] handling current node
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:23.402408       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:23.402417       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:23.403188       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:23.403223       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:33.413396       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:33.413577       1 main.go:227] handling current node
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:33.413639       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:33.413654       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:33.414293       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:33.414437       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:43.424274       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:43.424320       1 main.go:227] handling current node
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:43.424332       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:43.424339       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:43.424591       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:43.424608       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:53.433473       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:53.433591       1 main.go:227] handling current node
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:53.433607       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:53.433615       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.044489    5712 command_runner.go:130] ! I0318 12:31:53.433851       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.044489    5712 command_runner.go:130] ! I0318 12:31:53.433959       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.044489    5712 command_runner.go:130] ! I0318 12:32:03.443363       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.044489    5712 command_runner.go:130] ! I0318 12:32:03.443411       1 main.go:227] handling current node
	I0318 12:44:47.044489    5712 command_runner.go:130] ! I0318 12:32:03.443424       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:03.443450       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:03.444602       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:03.445390       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:13.460166       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:13.460215       1 main.go:227] handling current node
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:13.460229       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:13.460237       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.044687    5712 command_runner.go:130] ! I0318 12:32:13.460679       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.044743    5712 command_runner.go:130] ! I0318 12:32:13.460697       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.044743    5712 command_runner.go:130] ! I0318 12:32:23.479958       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.044743    5712 command_runner.go:130] ! I0318 12:32:23.480007       1 main.go:227] handling current node
	I0318 12:44:47.044743    5712 command_runner.go:130] ! I0318 12:32:23.480024       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.044743    5712 command_runner.go:130] ! I0318 12:32:23.480032       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.044811    5712 command_runner.go:130] ! I0318 12:32:23.480521       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.044811    5712 command_runner.go:130] ! I0318 12:32:23.480578       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.044811    5712 command_runner.go:130] ! I0318 12:32:33.491143       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.044866    5712 command_runner.go:130] ! I0318 12:32:33.491190       1 main.go:227] handling current node
	I0318 12:44:47.044866    5712 command_runner.go:130] ! I0318 12:32:33.491204       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.044866    5712 command_runner.go:130] ! I0318 12:32:33.491211       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.044866    5712 command_runner.go:130] ! I0318 12:32:33.491340       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.044928    5712 command_runner.go:130] ! I0318 12:32:33.491369       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.044928    5712 command_runner.go:130] ! I0318 12:32:43.505355       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.044928    5712 command_runner.go:130] ! I0318 12:32:43.505474       1 main.go:227] handling current node
	I0318 12:44:47.044982    5712 command_runner.go:130] ! I0318 12:32:43.505490       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.044982    5712 command_runner.go:130] ! I0318 12:32:43.505498       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.044982    5712 command_runner.go:130] ! I0318 12:32:43.505666       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.044982    5712 command_runner.go:130] ! I0318 12:32:43.505696       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045038    5712 command_runner.go:130] ! I0318 12:32:53.513310       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045038    5712 command_runner.go:130] ! I0318 12:32:53.513340       1 main.go:227] handling current node
	I0318 12:44:47.045038    5712 command_runner.go:130] ! I0318 12:32:53.513350       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045038    5712 command_runner.go:130] ! I0318 12:32:53.513357       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045038    5712 command_runner.go:130] ! I0318 12:32:53.513783       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045118    5712 command_runner.go:130] ! I0318 12:32:53.513865       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045118    5712 command_runner.go:130] ! I0318 12:33:03.527897       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045118    5712 command_runner.go:130] ! I0318 12:33:03.528343       1 main.go:227] handling current node
	I0318 12:44:47.045176    5712 command_runner.go:130] ! I0318 12:33:03.528485       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045176    5712 command_runner.go:130] ! I0318 12:33:03.528785       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045176    5712 command_runner.go:130] ! I0318 12:33:03.529110       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045232    5712 command_runner.go:130] ! I0318 12:33:03.529205       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045232    5712 command_runner.go:130] ! I0318 12:33:13.538048       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045232    5712 command_runner.go:130] ! I0318 12:33:13.538183       1 main.go:227] handling current node
	I0318 12:44:47.045232    5712 command_runner.go:130] ! I0318 12:33:13.538222       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045232    5712 command_runner.go:130] ! I0318 12:33:13.538317       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045232    5712 command_runner.go:130] ! I0318 12:33:13.538750       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045289    5712 command_runner.go:130] ! I0318 12:33:13.538888       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045289    5712 command_runner.go:130] ! I0318 12:33:23.555771       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045289    5712 command_runner.go:130] ! I0318 12:33:23.555820       1 main.go:227] handling current node
	I0318 12:44:47.045289    5712 command_runner.go:130] ! I0318 12:33:23.555895       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045289    5712 command_runner.go:130] ! I0318 12:33:23.555905       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045289    5712 command_runner.go:130] ! I0318 12:33:23.556511       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045515    5712 command_runner.go:130] ! I0318 12:33:23.556780       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045515    5712 command_runner.go:130] ! I0318 12:33:33.566023       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045574    5712 command_runner.go:130] ! I0318 12:33:33.566190       1 main.go:227] handling current node
	I0318 12:44:47.045574    5712 command_runner.go:130] ! I0318 12:33:33.566208       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045574    5712 command_runner.go:130] ! I0318 12:33:33.566217       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045574    5712 command_runner.go:130] ! I0318 12:33:33.566931       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045631    5712 command_runner.go:130] ! I0318 12:33:33.567031       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045631    5712 command_runner.go:130] ! I0318 12:33:43.581332       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045631    5712 command_runner.go:130] ! I0318 12:33:43.581382       1 main.go:227] handling current node
	I0318 12:44:47.045689    5712 command_runner.go:130] ! I0318 12:33:43.581449       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045689    5712 command_runner.go:130] ! I0318 12:33:43.581482       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045689    5712 command_runner.go:130] ! I0318 12:33:43.582063       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:43.582166       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:53.588426       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:53.588602       1 main.go:227] handling current node
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:53.588619       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:53.588628       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:53.588919       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:53.588937       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:03.604902       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:03.605007       1 main.go:227] handling current node
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:03.605023       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:03.605032       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:03.605612       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:03.605696       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:13.618369       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:13.618488       1 main.go:227] handling current node
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:13.618585       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:13.618604       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:13.618738       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:13.618747       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:23.626772       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:23.626887       1 main.go:227] handling current node
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:23.626903       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:23.626911       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:23.627415       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:23.627448       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:33.644122       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:33.644215       1 main.go:227] handling current node
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:33.644233       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:33.644757       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:33.645128       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:33.645240       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:43.661684       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:43.661731       1 main.go:227] handling current node
	I0318 12:44:47.046297    5712 command_runner.go:130] ! I0318 12:34:43.661744       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.046648    5712 command_runner.go:130] ! I0318 12:34:43.661751       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.046648    5712 command_runner.go:130] ! I0318 12:34:43.662532       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.046648    5712 command_runner.go:130] ! I0318 12:34:43.662645       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.046957    5712 command_runner.go:130] ! I0318 12:34:53.676649       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:34:53.677242       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:34:53.677518       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:34:53.677631       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:34:53.677873       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:34:53.677905       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:03.685328       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:03.685457       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:03.685474       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:03.685483       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:03.685861       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:03.686001       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:13.702673       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:13.702782       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:13.702801       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:13.703456       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:13.703827       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:13.703864       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:23.711167       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:23.711370       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:23.711388       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:23.711398       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:23.712127       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:23.712222       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:33.724041       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:33.724810       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:33.724973       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:33.725045       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:33.725458       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:33.725875       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:43.740216       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:43.740493       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:43.740511       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:43.740520       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:43.741453       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:43.741584       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:53.748632       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:53.749163       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:53.749285       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047566    5712 command_runner.go:130] ! I0318 12:35:53.749498       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047566    5712 command_runner.go:130] ! I0318 12:35:53.749815       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:35:53.749904       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:36:03.765208       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:36:03.765326       1 main.go:227] handling current node
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:36:03.765343       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:36:03.765351       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:36:03.765883       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:36:03.766028       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047727    5712 command_runner.go:130] ! I0318 12:36:13.775221       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047727    5712 command_runner.go:130] ! I0318 12:36:13.775396       1 main.go:227] handling current node
	I0318 12:44:47.047781    5712 command_runner.go:130] ! I0318 12:36:13.775430       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047781    5712 command_runner.go:130] ! I0318 12:36:13.775502       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047781    5712 command_runner.go:130] ! I0318 12:36:13.776058       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047866    5712 command_runner.go:130] ! I0318 12:36:13.776177       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047866    5712 command_runner.go:130] ! I0318 12:36:23.790073       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047941    5712 command_runner.go:130] ! I0318 12:36:23.790179       1 main.go:227] handling current node
	I0318 12:44:47.047941    5712 command_runner.go:130] ! I0318 12:36:23.790195       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047941    5712 command_runner.go:130] ! I0318 12:36:23.790207       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047992    5712 command_runner.go:130] ! I0318 12:36:23.790761       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047992    5712 command_runner.go:130] ! I0318 12:36:23.790798       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047992    5712 command_runner.go:130] ! I0318 12:36:33.800116       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048043    5712 command_runner.go:130] ! I0318 12:36:33.800240       1 main.go:227] handling current node
	I0318 12:44:47.048043    5712 command_runner.go:130] ! I0318 12:36:33.800256       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048043    5712 command_runner.go:130] ! I0318 12:36:33.800265       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048091    5712 command_runner.go:130] ! I0318 12:36:33.800837       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048091    5712 command_runner.go:130] ! I0318 12:36:33.800858       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048091    5712 command_runner.go:130] ! I0318 12:36:43.817961       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048143    5712 command_runner.go:130] ! I0318 12:36:43.818115       1 main.go:227] handling current node
	I0318 12:44:47.048143    5712 command_runner.go:130] ! I0318 12:36:43.818132       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048143    5712 command_runner.go:130] ! I0318 12:36:43.818146       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048143    5712 command_runner.go:130] ! I0318 12:36:43.818537       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048220    5712 command_runner.go:130] ! I0318 12:36:43.818661       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048220    5712 command_runner.go:130] ! I0318 12:36:53.827340       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048220    5712 command_runner.go:130] ! I0318 12:36:53.827385       1 main.go:227] handling current node
	I0318 12:44:47.048220    5712 command_runner.go:130] ! I0318 12:36:53.827398       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048220    5712 command_runner.go:130] ! I0318 12:36:53.827406       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:36:53.827787       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:36:53.827885       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:37:03.840761       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:37:03.840837       1 main.go:227] handling current node
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:37:03.840851       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:37:03.840859       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:37:03.841285       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048424    5712 command_runner.go:130] ! I0318 12:37:03.841319       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048424    5712 command_runner.go:130] ! I0318 12:37:13.848127       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048424    5712 command_runner.go:130] ! I0318 12:37:13.848174       1 main.go:227] handling current node
	I0318 12:44:47.048424    5712 command_runner.go:130] ! I0318 12:37:13.848188       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048510    5712 command_runner.go:130] ! I0318 12:37:13.848195       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048510    5712 command_runner.go:130] ! I0318 12:37:13.848630       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048510    5712 command_runner.go:130] ! I0318 12:37:13.848646       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048569    5712 command_runner.go:130] ! I0318 12:37:23.863745       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048569    5712 command_runner.go:130] ! I0318 12:37:23.863916       1 main.go:227] handling current node
	I0318 12:44:47.048569    5712 command_runner.go:130] ! I0318 12:37:23.863950       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048569    5712 command_runner.go:130] ! I0318 12:37:23.863996       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048627    5712 command_runner.go:130] ! I0318 12:37:23.864419       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048627    5712 command_runner.go:130] ! I0318 12:37:23.864510       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048627    5712 command_runner.go:130] ! I0318 12:37:33.876214       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048685    5712 command_runner.go:130] ! I0318 12:37:33.876331       1 main.go:227] handling current node
	I0318 12:44:47.048685    5712 command_runner.go:130] ! I0318 12:37:33.876347       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048685    5712 command_runner.go:130] ! I0318 12:37:33.876355       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048740    5712 command_runner.go:130] ! I0318 12:37:33.877021       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048740    5712 command_runner.go:130] ! I0318 12:37:33.877100       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048740    5712 command_runner.go:130] ! I0318 12:37:43.886399       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048795    5712 command_runner.go:130] ! I0318 12:37:43.886544       1 main.go:227] handling current node
	I0318 12:44:47.048795    5712 command_runner.go:130] ! I0318 12:37:43.886626       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048795    5712 command_runner.go:130] ! I0318 12:37:43.886636       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048846    5712 command_runner.go:130] ! I0318 12:37:43.886872       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048846    5712 command_runner.go:130] ! I0318 12:37:43.886890       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048846    5712 command_runner.go:130] ! I0318 12:37:53.903761       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048895    5712 command_runner.go:130] ! I0318 12:37:53.903845       1 main.go:227] handling current node
	I0318 12:44:47.048895    5712 command_runner.go:130] ! I0318 12:37:53.903871       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:37:53.903880       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:37:53.905033       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:37:53.905181       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:03.919532       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:03.919783       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:03.919840       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:03.919894       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:03.920221       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:03.920390       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:13.927894       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:13.928004       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:13.928022       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:13.928031       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:13.928232       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:13.928269       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:23.943692       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:23.943780       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:23.943795       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:23.943804       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:23.944523       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:23.944596       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:33.952000       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:33.952098       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:33.952114       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:33.952123       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:33.952466       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:33.952503       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:43.965979       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:43.966101       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:43.966117       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:43.966125       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.989210       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.989308       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.989322       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.989373       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.989864       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.989957       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.990028       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.157.200 Flags: [] Table: 0} 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:39:03.996429       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:39:03.996598       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:39:03.996614       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:39:03.996623       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:39:03.996739       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:03.996753       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:14.008318       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:14.008384       1 main.go:227] handling current node
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:14.008398       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:14.008405       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:14.009080       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:14.009179       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:24.016154       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:24.016315       1 main.go:227] handling current node
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:24.016330       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:24.016338       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:24.016842       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:24.016875       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:34.029061       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049678    5712 command_runner.go:130] ! I0318 12:39:34.029159       1 main.go:227] handling current node
	I0318 12:44:47.049678    5712 command_runner.go:130] ! I0318 12:39:34.029175       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049678    5712 command_runner.go:130] ! I0318 12:39:34.029184       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049678    5712 command_runner.go:130] ! I0318 12:39:34.030103       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:34.030216       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:44.037921       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:44.037960       1 main.go:227] handling current node
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:44.037972       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:44.037981       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:44.038243       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:44.038318       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:54.057786       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:54.058021       1 main.go:227] handling current node
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:54.058100       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:54.058189       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:54.058376       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:54.058478       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:04.067119       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:04.067262       1 main.go:227] handling current node
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:04.067280       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:04.067289       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:04.067742       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:04.067846       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:14.082426       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:14.082921       1 main.go:227] handling current node
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:14.082946       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:14.082956       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:14.083174       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:14.083247       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:24.098060       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:24.098161       1 main.go:227] handling current node
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:24.098178       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:24.098187       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:24.098316       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:24.098324       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:34.335103       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:34.335169       1 main.go:227] handling current node
	I0318 12:44:47.050291    5712 command_runner.go:130] ! I0318 12:40:34.335185       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.050291    5712 command_runner.go:130] ! I0318 12:40:34.335192       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:34.335470       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:34.335488       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:44.342962       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:44.343122       1 main.go:227] handling current node
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:44.343139       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:44.343148       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:44.343738       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:44.343780       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
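	Note: the kindnet entries above are its roughly 10-second reconcile loop: for each node it either notes the current node or records the remote node's pod CIDR and ensures a route to that CIDR via the node's IP. The single routes.go:62 line at 12:38:53 is the loop reacting to multinode-642600-m03 rejoining with a new node IP (172.25.157.200) and pod CIDR (10.244.3.0/24, previously 10.244.2.0/24 via 172.25.159.254). A minimal sketch of that route programming, assuming the github.com/vishvananda/netlink package (whose Route struct matches the printed {Ifindex: ... Dst: ... Gw: ...} form); this is Linux-only, needs CAP_NET_ADMIN, and is not kindnet's actual code:

	// route_sketch.go - install a route to a remote node's pod CIDR via that
	// node's IP, the operation the routes.go:62 line above reports.
	package main

	import (
		"log"
		"net"

		"github.com/vishvananda/netlink"
	)

	func ensurePodCIDRRoute(cidr, nodeIP string) error {
		dst, err := netlink.ParseIPNet(cidr)
		if err != nil {
			return err
		}
		// RouteReplace creates the route if absent and updates it if it
		// changed, which keeps the 10-second reconcile loop idempotent.
		return netlink.RouteReplace(&netlink.Route{
			Dst: dst,
			Gw:  net.ParseIP(nodeIP),
		})
	}

	func main() {
		// Values taken from the log: m03's new pod CIDR and node IP.
		if err := ensurePodCIDRRoute("10.244.3.0/24", "172.25.157.200"); err != nil {
			log.Fatal(err)
		}
	}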
	I0318 12:44:47.069992    5712 logs.go:123] Gathering logs for Docker ...
	I0318 12:44:47.069992    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 12:44:47.105334    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:47.105334    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:47.105482    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:47.105482    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:47.105482    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:47.105482    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:47.105482    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.105624    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0318 12:44:47.105624    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.105624    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:47.105624    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:47.105624    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:47.105741    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:47.105741    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:47.105741    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:47.105741    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.105741    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0318 12:44:47.105843    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.105843    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:47.105843    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:47.105843    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:47.105843    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.106038    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 systemd[1]: Starting Docker Application Container Engine...
	I0318 12:44:47.106038    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.799415676Z" level=info msg="Starting up"
	I0318 12:44:47.106038    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.800442474Z" level=info msg="containerd not running, starting managed containerd"
	I0318 12:44:47.106038    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.801655972Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	I0318 12:44:47.106134    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.836542309Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 12:44:47.106134    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.866837154Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 12:44:47.106134    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.866991653Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 12:44:47.106134    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.867166153Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 12:44:47.106134    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.867346253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106259    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868353051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.106259    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868455451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106259    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868755450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.106259    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868785850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106384    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868803850Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 12:44:47.106384    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868815950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106384    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.869407649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106384    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.870171948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106384    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873462742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.106502    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873569242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106502    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873718241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.106502    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873818241Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 12:44:47.106595    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874315040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 12:44:47.106624    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874434440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 12:44:47.106624    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874453940Z" level=info msg="metadata content store policy set" policy=shared
	I0318 12:44:47.106624    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880096930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 12:44:47.106624    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880252829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 12:44:47.106725    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880377329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 12:44:47.106725    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880397729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 12:44:47.106725    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880414329Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 12:44:47.106725    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880488329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 12:44:47.106839    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880819128Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 12:44:47.106839    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880926428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 12:44:47.106839    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881236528Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 12:44:47.106839    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881376427Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 12:44:47.106936    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881400527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.106936    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881426127Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.106936    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881441527Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.106936    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881474927Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.107028    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881491327Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.107028    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881506427Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.107028    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881521027Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.107028    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881536227Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.107122    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881566927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107122    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881586627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107122    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881601327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107122    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881617327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107122    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881631227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107122    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881646527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107234    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881659427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107234    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881673727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107234    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881757827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107234    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881783527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107234    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881798027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107351    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881812927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107351    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881826827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107351    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881844827Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 12:44:47.107351    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881868126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107351    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881889326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107537    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881902926Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 12:44:47.107537    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882002626Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 12:44:47.107537    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882117726Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 12:44:47.107537    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882162226Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 12:44:47.107672    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882178726Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 12:44:47.107672    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882242626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107672    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882337926Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 12:44:47.107732    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882358926Z" level=info msg="NRI interface is disabled by configuration."
	I0318 12:44:47.107789    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882603625Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 12:44:47.107789    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882759725Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 12:44:47.107789    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.883033524Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 12:44:47.107868    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.883153424Z" level=info msg="containerd successfully booted in 0.049971s"
	I0318 12:44:47.107868    5712 command_runner.go:130] > Mar 18 12:42:47 multinode-642600 dockerd[652]: time="2024-03-18T12:42:47.858472851Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 12:44:47.107868    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.057442718Z" level=info msg="Loading containers: start."
	I0318 12:44:47.107931    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.544395210Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 12:44:47.107931    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.632442528Z" level=info msg="Loading containers: done."
	I0318 12:44:47.107931    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.662805631Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 12:44:47.107931    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.663682128Z" level=info msg="Daemon has completed initialization"
	I0318 12:44:47.107931    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.725498031Z" level=info msg="API listen on [::]:2376"
	I0318 12:44:47.108025    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 systemd[1]: Started Docker Application Container Engine.
	I0318 12:44:47.108025    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.725911430Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 12:44:47.108025    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 systemd[1]: Stopping Docker Application Container Engine...
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.631434936Z" level=info msg="Processing signal 'terminated'"
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.633587433Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634258932Z" level=info msg="Daemon shutdown complete"
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634450831Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634476831Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: docker.service: Deactivated successfully.
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: Stopped Docker Application Container Engine.
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: Starting Docker Application Container Engine...
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.717087499Z" level=info msg="Starting up"
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.718262797Z" level=info msg="containerd not running, starting managed containerd"
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.719705495Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1048
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.754738639Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784193992Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784236292Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784275292Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784291492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108376    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784317492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.108376    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784331992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108376    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784550091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.108376    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784651691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108376    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784673391Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 12:44:47.108499    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784704091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108499    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784764391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108499    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784996290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108598    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787641686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.108598    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787744286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108669    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787950186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.108727    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788044886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 12:44:47.108727    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788091986Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 12:44:47.108727    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788127185Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 12:44:47.108823    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788138585Z" level=info msg="metadata content store policy set" policy=shared
	I0318 12:44:47.108875    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789136284Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 12:44:47.108875    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789269784Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 12:44:47.108875    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789298984Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 12:44:47.108875    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789320484Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789342084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789644383Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.790600382Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791760980Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791832280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791851580Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791866579Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791880279Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791969479Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791989879Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792004479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792018079Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792030379Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792042479Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792063279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792077879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792090579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792103979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792117779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792135679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792148379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792161279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792174179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792188479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792199579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.109529    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792211479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.109529    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792223379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.109529    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792238079Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 12:44:47.109529    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792261579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.109529    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792276079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.109623    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792287879Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 12:44:47.109623    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792337479Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 12:44:47.109623    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792356479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792368079Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792380379Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792530178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792576778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792591078Z" level=info msg="NRI interface is disabled by configuration."
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792811378Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792927678Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.793108678Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.793160477Z" level=info msg="containerd successfully booted in 0.039931s"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:17 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:17.767243919Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:17 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:17.800090666Z" level=info msg="Loading containers: start."
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.103803081Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.187726546Z" level=info msg="Loading containers: done."
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.216487100Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.216648600Z" level=info msg="Daemon has completed initialization"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.271691012Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.271966711Z" level=info msg="API listen on [::]:2376"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 systemd[1]: Started Docker Application Container Engine.
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Loaded network plugin cni"
	I0318 12:44:47.110224    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0318 12:44:47.110344    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker Info: &{ID:aa9100d3-1595-41ce-b36f-06932aef3ecb Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:53 SystemTime:2024-03-18T12:43:19.415553382Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002da070 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-642600 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0318 12:44:47.110344    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0318 12:44:47.110418    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0318 12:44:47.110418    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0318 12:44:47.110418    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Start cri-dockerd grpc backend"
	I0318 12:44:47.110484    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.110484    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-fgn7v_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ed38da653fbefea9aeb0ebdb91f985394a7a792571704a4875018f5a6bc9abda\""
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-48qkw_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e\""
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.316277241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.317878239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.318571937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.319101537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356638277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356750476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356767376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.357118676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.418245378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.421018274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.421217073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.422102972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428274662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428365762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428455862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428580261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67004ee038ee4247f6f751987304426067a63cee8c1636408dd16efea728ba78/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f62197122538f83943df8b19710794ea6ea9a9ffa884082a1a62435e9b152c3f/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eca6768355c74817c50b811b96b5fcc93a181c4968c53d4d4b0d0252ff6dbd0a/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7281d6e698ea2dc42d7d3093ccde32b770bf8367fdb58230694380f40daeb9f/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879224940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111092    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879310840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111092    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879325040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111092    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879857239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111092    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.050226267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111092    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.051715465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111231    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.056267457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111231    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.056729856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111333    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.064877643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111372    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065332743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111372    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065495042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111372    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065849742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111519    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091573301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111519    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091639201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111519    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091652401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111519    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091761800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111519    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:30Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0318 12:44:47.111624    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.923135971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111624    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924017669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111624    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924165569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111746    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924385369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111746    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955673419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111746    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955753819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111746    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955772119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111855    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.956168818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111855    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964148405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111913    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964256705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111913    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964669604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111964    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964999404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111964    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a2f0ccaf5c4c6c0019124eda20c358dfa8aa20f0c92ade10aa3de32608e3527/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.111964    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/889c16eb0ab731956d02a28d0337dc6ff349dc574ba10d4fc1a939fb2e09d6d3/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.112058    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391303322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.112058    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391389722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.112058    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391408822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112130    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391535621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112171    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413113087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.112210    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413460286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.112210    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413726486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112210    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.414492285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112287    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ecbdcbdad3fa79af8ef70896ae67d65b14c47b5811078c5d6d167e0f295d1bc/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.112287    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850170088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.112353    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850431387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.112353    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850449987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112405    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850590387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112405    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011137468Z" level=info msg="shim disconnected" id=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 namespace=moby
	I0318 12:44:47.112405    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011334567Z" level=warning msg="cleaning up after shim disconnected" id=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 namespace=moby
	I0318 12:44:47.112498    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011364567Z" level=info msg="cleaning up dead shim" namespace=moby
	I0318 12:44:47.112498    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1042]: time="2024-03-18T12:44:03.012148165Z" level=info msg="ignoring event" container=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0318 12:44:47.112534    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562340104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.112578    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562524303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.112638    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562584503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112638    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.563253802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112707    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.376262769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.112733    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.376780468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.112733    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.377021468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112789    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.377223268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112826    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:44:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1090dd57409807a15613607fd810b67863a9dd9c5a8512d7a6720906641c7f26/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.112826    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684170919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.112826    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684458920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.112890    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684558520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112932    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.685142822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112979    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901354745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901518146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901538746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901651446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:44:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e1b2432b0ed66a1175586c13232eb9b9239f18a4f9a86e2a0c5f48c1407fdb14/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.227440411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.227939926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.228081131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.228507343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:40 multinode-642600 dockerd[1042]: 2024/03/18 12:44:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113555    5712 command_runner.go:130] > Mar 18 12:44:40 multinode-642600 dockerd[1042]: 2024/03/18 12:44:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113555    5712 command_runner.go:130] > Mar 18 12:44:40 multinode-642600 dockerd[1042]: 2024/03/18 12:44:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113555    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113555    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:44 multinode-642600 dockerd[1042]: 2024/03/18 12:44:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:44 multinode-642600 dockerd[1042]: 2024/03/18 12:44:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:44 multinode-642600 dockerd[1042]: 2024/03/18 12:44:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:44 multinode-642600 dockerd[1042]: 2024/03/18 12:44:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:47 multinode-642600 dockerd[1042]: 2024/03/18 12:44:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
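
Note on the repeated dockerd warnings above: they are emitted by Go's net/http server, which logs a "superfluous response.WriteHeader call" whenever a handler writes a status code twice; here the second call comes from Docker's otelhttp instrumentation wrapper. A minimal sketch that reproduces the same log line (illustrative only, not Docker's code — the handler and port are made up):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		// The second call is ignored, and net/http logs:
		// "http: superfluous response.WriteHeader call from main.main.func1 (...)"
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
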
	I0318 12:44:47.149837    5712 logs.go:123] Gathering logs for kubelet ...
	I0318 12:44:47.150876    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
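
The gather step above shells out to journald on the guest to pull the last 400 kubelet entries. A minimal local equivalent of that step, assuming a systemd host with journalctl on PATH (a sketch of the same command, not minikube's ssh_runner code):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the gather step logged above: last 400 entries of the kubelet unit.
	cmd := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("journalctl failed: %v\n", err)
	}
	fmt.Print(string(out))
}
```
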
	I0318 12:44:47.183095    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:47.183584    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.841405    1388 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:47.183584    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.841736    1388 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.183584    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.842325    1388 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:47.183584    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: E0318 12:43:20.842583    1388 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 12:44:47.183705    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:47.183705    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 12:44:47.183705    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0318 12:44:47.183705    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 12:44:47.183831    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:47.183831    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.629315    1445 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:47.183831    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.629808    1445 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.183831    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.631096    1445 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:47.183962    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: E0318 12:43:21.631229    1445 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 12:44:47.183962    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:47.183962    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 12:44:47.183962    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 12:44:47.183962    5712 command_runner.go:130] > Mar 18 12:43:23 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:47.183962    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.100950    1523 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:47.184073    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.101311    1523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.184073    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.101646    1523 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:47.184073    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.108175    1523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0318 12:44:47.184073    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.123413    1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:47.184303    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.204504    1523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0318 12:44:47.184303    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205069    1523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0318 12:44:47.184408    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205344    1523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0318 12:44:47.184408    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205667    1523 topology_manager.go:138] "Creating topology manager with none policy"
	I0318 12:44:47.184408    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205685    1523 container_manager_linux.go:301] "Creating device plugin manager"
	I0318 12:44:47.184408    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.206240    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0318 12:44:47.184491    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.208674    1523 kubelet.go:393] "Attempting to sync node with API server"
	I0318 12:44:47.184528    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.208817    1523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0318 12:44:47.184528    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.209351    1523 kubelet.go:309] "Adding apiserver pod source"
	I0318 12:44:47.184528    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.209491    1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0318 12:44:47.184597    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.212857    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.184597    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.213311    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.184597    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.219866    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.184684    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.220057    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.184719    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.240215    1523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.245761    1523 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.248742    1523 server.go:1232] "Started kubelet"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.249814    1523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.251561    1523 server.go:462] "Adding debug handlers to kubelet server"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.254285    1523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.255480    1523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.255659    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-642600.17bddc6f5820f7a9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-642600", UID:"multinode-642600", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-642600"}, FirstTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-642600"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.25.148.129:8443: connect: connection refused'(may retry after sleeping)
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.259469    1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.261490    1523 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.265275    1523 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.270368    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="200ms"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.275611    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.275814    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.317069    1523 reconciler_new.go:29] "Reconciler: start to sync state"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327943    1523 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327963    1523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327985    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329007    1523 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329047    1523 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329057    1523 policy_none.go:49] "None policy: Start"
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.336597    1523 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.336631    1523 state_mem.go:35] "Initializing new in-memory state store"
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.337548    1523 state_mem.go:75] "Updated machine memory state"
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.345495    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.348154    1523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0318 12:44:47.185559    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.351399    1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0318 12:44:47.185559    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.355603    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0318 12:44:47.185559    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.356232    1523 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0318 12:44:47.185559    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.357037    1523 kubelet.go:2303] "Starting kubelet main sync loop"
	I0318 12:44:47.185660    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.359069    1523 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0318 12:44:47.185660    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.367050    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.185660    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.367230    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.185660    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.387242    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 12:44:47.185660    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 12:44:47.185818    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 12:44:47.185818    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 12:44:47.185818    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
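
The ip6tables failure above is kubelet's periodic canary check: per the log it tries to create a KUBE-KUBELET-CANARY chain (used to detect when iptables rules have been flushed externally), and on this guest the IPv6 nat table is unavailable. A quick way to probe for the same condition, assuming a Linux host with ip6tables installed (a hypothetical check, not kubelet's code):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Listing the nat table fails on kernels without ip6tables nat support,
	// which is the underlying cause of the canary error logged above.
	out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
	if err != nil {
		fmt.Printf("ip6tables nat table unavailable: %v\n%s", err, out)
		return
	}
	fmt.Println("ip6tables nat table present")
}
```
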
	I0318 12:44:47.185905    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.387428    1523 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-642600\" not found"
	I0318 12:44:47.185905    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.399151    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:47.185905    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.399841    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:47.185977    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.460339    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d5f09afee1a6ef36657c1ae3335ddda6" podNamespace="kube-system" podName="etcd-multinode-642600"
	I0318 12:44:47.185977    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.472389    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="400ms"
	I0318 12:44:47.186119    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.474475    1523 topology_manager.go:215] "Topology Admit Handler" podUID="624de65f019baf96d4a0e2fb6064e413" podNamespace="kube-system" podName="kube-apiserver-multinode-642600"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.487469    1523 topology_manager.go:215] "Topology Admit Handler" podUID="a1608bc774d0b3e96e1b6fbbded5cb52" podNamespace="kube-system" podName="kube-controller-manager-multinode-642600"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.500311    1523 topology_manager.go:215] "Topology Admit Handler" podUID="cf50844b540be8ed0b3e767db413ac8f" podNamespace="kube-system" podName="kube-scheduler-multinode-642600"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.527553    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d5f09afee1a6ef36657c1ae3335ddda6-etcd-certs\") pod \"etcd-multinode-642600\" (UID: \"d5f09afee1a6ef36657c1ae3335ddda6\") " pod="kube-system/etcd-multinode-642600"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.527604    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d5f09afee1a6ef36657c1ae3335ddda6-etcd-data\") pod \"etcd-multinode-642600\" (UID: \"d5f09afee1a6ef36657c1ae3335ddda6\") " pod="kube-system/etcd-multinode-642600"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534726    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed38da653fbefea9aeb0ebdb91f985394a7a792571704a4875018f5a6bc9abda"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534857    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d766c4514f0bf79902b72d04d9e3a09fc2bcf5ef330f41cd3e84e63f5151f2b6"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534873    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f100b1062a56929e04e6e4377055b065d93a28c504f060cce4695165a2c33db0"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534885    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a9b4c05a5ccd5364b8dac2797803c98520c4f98df0fba77af7521af64a15152"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534943    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f4709a3a45a45f0c67f457df8bb202ea2867cfedeaec4a164509190df13f510"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534961    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3500a9f1ca84ed3d58cdd473a0c7c47a59643858c05dfd90247a09b1b43302bd"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.552869    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad98ae0cd7c7708c7e02f0b23fc33f1ca2b404bd7fec324c21beefcbe17d009"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.571969    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.589127    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef37141be6db2ba71fd0f1d2feee00d6ab5d31d607323e4f5ffab4a3e70cfa5"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.614112    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.616006    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:47.186702    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629143    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-flexvolume-dir\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:47.186793    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629404    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:47.186880    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629689    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:47.186880    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629754    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-ca-certs\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:47.186947    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629780    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-k8s-certs\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:47.187032    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629802    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-kubeconfig\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:47.187032    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629825    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf50844b540be8ed0b3e767db413ac8f-kubeconfig\") pod \"kube-scheduler-multinode-642600\" (UID: \"cf50844b540be8ed0b3e767db413ac8f\") " pod="kube-system/kube-scheduler-multinode-642600"
	I0318 12:44:47.187126    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629860    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-ca-certs\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:47.187126    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629919    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-k8s-certs\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:47.187202    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.875125    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="800ms"
	I0318 12:44:47.187202    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.030740    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:47.187202    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.031776    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:47.187330    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.266849    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187330    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.266980    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187330    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.674768    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7281d6e698ea2dc42d7d3093ccde32b770bf8367fdb58230694380f40daeb9f"
	I0318 12:44:47.187330    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.676706    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="1.6s"
	I0318 12:44:47.187330    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.692553    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eca6768355c74817c50b811b96b5fcc93a181c4968c53d4d4b0d0252ff6dbd0a"
	I0318 12:44:47.187549    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.700976    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187549    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.701062    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187549    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.708111    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f62197122538f83943df8b19710794ea6ea9a9ffa884082a1a62435e9b152c3f"
	I0318 12:44:47.187549    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.731607    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187671    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.731695    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187671    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.790774    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187751    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.790867    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187751    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.868581    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:47.187829    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.869663    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:47.187906    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 kubelet[1523]: E0318 12:43:26.129309    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-642600.17bddc6f5820f7a9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-642600", UID:"multinode-642600", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-642600"}, FirstTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-642600"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.25.148.129:8443: connect: connection refused'(may retry after sleeping)
	I0318 12:44:47.187906    5712 command_runner.go:130] > Mar 18 12:43:27 multinode-642600 kubelet[1523]: I0318 12:43:27.488157    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:47.187906    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.626198    1523 kubelet_node_status.go:108] "Node was previously registered" node="multinode-642600"
	I0318 12:44:47.188003    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.626989    1523 kubelet_node_status.go:73] "Successfully registered node" node="multinode-642600"
	I0318 12:44:47.188003    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.640050    1523 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0318 12:44:47.188003    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.642279    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0318 12:44:47.188081    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.658382    1523 setters.go:552] "Node became not ready" node="multinode-642600" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-18T12:43:30Z","lastTransitionTime":"2024-03-18T12:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0318 12:44:47.188081    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.223393    1523 apiserver.go:52] "Watching apiserver"
	I0318 12:44:47.188081    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.230566    1523 topology_manager.go:215] "Topology Admit Handler" podUID="acd9d7a0-0e27-4bbb-8562-6fbf374742ca" podNamespace="kube-system" podName="kindnet-kpt4f"
	I0318 12:44:47.188081    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231421    1523 topology_manager.go:215] "Topology Admit Handler" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b" podNamespace="kube-system" podName="coredns-5dd5756b68-fgn7v"
	I0318 12:44:47.188312    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231644    1523 topology_manager.go:215] "Topology Admit Handler" podUID="449242c2-ad12-4da5-b339-3be7ab8a9b16" podNamespace="kube-system" podName="kube-proxy-4dg79"
	I0318 12:44:47.188312    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231779    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d2718b8a-26a9-4c86-bf9a-221d1ee23ceb" podNamespace="kube-system" podName="storage-provisioner"
	I0318 12:44:47.188312    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231939    1523 topology_manager.go:215] "Topology Admit Handler" podUID="45969c0e-ac43-459e-95c0-86f7b76947db" podNamespace="default" podName="busybox-5b5d89c9d6-48qkw"
	I0318 12:44:47.188421    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.232191    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.188421    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.233435    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.188421    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.235227    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-642600" podUID="4aa98cb9-f6ab-40b3-8c15-235ba4e09985"
	I0318 12:44:47.188506    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.236365    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-642600" podUID="237133d7-6f1a-42ee-8cf2-a2d7564d67fc"
	I0318 12:44:47.188506    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.266715    1523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0318 12:44:47.188589    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.289094    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:47.188671    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.301996    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-642600"
	I0318 12:44:47.188671    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.322408    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/449242c2-ad12-4da5-b339-3be7ab8a9b16-lib-modules\") pod \"kube-proxy-4dg79\" (UID: \"449242c2-ad12-4da5-b339-3be7ab8a9b16\") " pod="kube-system/kube-proxy-4dg79"
	I0318 12:44:47.188671    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.322793    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-xtables-lock\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:47.188775    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323081    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d2718b8a-26a9-4c86-bf9a-221d1ee23ceb-tmp\") pod \"storage-provisioner\" (UID: \"d2718b8a-26a9-4c86-bf9a-221d1ee23ceb\") " pod="kube-system/storage-provisioner"
	I0318 12:44:47.188775    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323213    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-cni-cfg\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:47.188775    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323245    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/449242c2-ad12-4da5-b339-3be7ab8a9b16-xtables-lock\") pod \"kube-proxy-4dg79\" (UID: \"449242c2-ad12-4da5-b339-3be7ab8a9b16\") " pod="kube-system/kube-proxy-4dg79"
	I0318 12:44:47.188891    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323294    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-lib-modules\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:47.188891    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.324469    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.189001    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.324580    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:31.824540428 +0000 UTC m=+7.835780164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.189001    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339515    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189001    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339554    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189001    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339661    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:31.839645304 +0000 UTC m=+7.850885040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189124    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.384452    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-642600" podStartSLOduration=0.384368133 podCreationTimestamp="2024-03-18 12:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:43:31.360389871 +0000 UTC m=+7.371629607" watchObservedRunningTime="2024-03-18 12:43:31.384368133 +0000 UTC m=+7.395607769"
	I0318 12:44:47.189199    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.431280    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-642600" podStartSLOduration=0.431225058 podCreationTimestamp="2024-03-18 12:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:43:31.388015127 +0000 UTC m=+7.399254863" watchObservedRunningTime="2024-03-18 12:43:31.431225058 +0000 UTC m=+7.442464794"
	I0318 12:44:47.189237    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.828430    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.189237    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.828605    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:32.828568222 +0000 UTC m=+8.839807858 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.189326    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930285    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189326    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930420    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189326    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930532    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:32.930496159 +0000 UTC m=+8.941735795 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189435    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.133795    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889c16eb0ab731956d02a28d0337dc6ff349dc574ba10d4fc1a939fb2e09d6d3"
	I0318 12:44:47.189435    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.147805    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a2f0ccaf5c4c6c0019124eda20c358dfa8aa20f0c92ade10aa3de32608e3527"
	I0318 12:44:47.189435    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.369742    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d04d3e415061983b742e6c14f1a5f562" path="/var/lib/kubelet/pods/d04d3e415061983b742e6c14f1a5f562/volumes"
	I0318 12:44:47.189536    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.371223    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ec96a596e22f5afedbd92a854d1b8bec" path="/var/lib/kubelet/pods/ec96a596e22f5afedbd92a854d1b8bec/volumes"
	I0318 12:44:47.189536    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.628360    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-642600" podUID="237133d7-6f1a-42ee-8cf2-a2d7564d67fc"
	I0318 12:44:47.189536    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.628590    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ecbdcbdad3fa79af8ef70896ae67d65b14c47b5811078c5d6d167e0f295d1bc"
	I0318 12:44:47.189648    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.836390    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.189785    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.836523    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:34.836498609 +0000 UTC m=+10.847738345 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937295    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937349    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937443    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:34.937423048 +0000 UTC m=+10.948662684 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:33 multinode-642600 kubelet[1523]: E0318 12:43:33.359564    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:33 multinode-642600 kubelet[1523]: E0318 12:43:33.359732    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.409996    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.855132    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.855288    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:38.85526758 +0000 UTC m=+14.866507216 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955668    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955718    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955777    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:38.955759519 +0000 UTC m=+14.966999155 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:35 multinode-642600 kubelet[1523]: E0318 12:43:35.360249    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:35 multinode-642600 kubelet[1523]: E0318 12:43:35.360337    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:37 multinode-642600 kubelet[1523]: E0318 12:43:37.360005    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.190363    5712 command_runner.go:130] > Mar 18 12:43:37 multinode-642600 kubelet[1523]: E0318 12:43:37.360005    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.190363    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.890447    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.190363    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.890642    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:46.890560586 +0000 UTC m=+22.901800222 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.190363    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991640    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.190482    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991754    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991856    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:46.991836746 +0000 UTC m=+23.003076482 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.360236    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.360508    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.425235    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:41 multinode-642600 kubelet[1523]: E0318 12:43:41.360362    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:41 multinode-642600 kubelet[1523]: E0318 12:43:41.360863    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:43 multinode-642600 kubelet[1523]: E0318 12:43:43.359722    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:43 multinode-642600 kubelet[1523]: E0318 12:43:43.360308    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:44 multinode-642600 kubelet[1523]: E0318 12:43:44.438590    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:45 multinode-642600 kubelet[1523]: E0318 12:43:45.360026    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:45 multinode-642600 kubelet[1523]: E0318 12:43:45.360101    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:46 multinode-642600 kubelet[1523]: E0318 12:43:46.970368    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:46 multinode-642600 kubelet[1523]: E0318 12:43:46.970583    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:44:02.970562522 +0000 UTC m=+38.981802258 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071352    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071390    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.191139    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071448    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:44:03.071430219 +0000 UTC m=+39.082669855 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.191139    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.359847    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191235    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.360318    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191281    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.360074    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191318    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.360604    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191351    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.453099    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:51 multinode-642600 kubelet[1523]: E0318 12:43:51.360369    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:51 multinode-642600 kubelet[1523]: E0318 12:43:51.361016    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:53 multinode-642600 kubelet[1523]: E0318 12:43:53.359799    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:53 multinode-642600 kubelet[1523]: E0318 12:43:53.359935    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:54 multinode-642600 kubelet[1523]: E0318 12:43:54.467487    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:55 multinode-642600 kubelet[1523]: E0318 12:43:55.359513    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:55 multinode-642600 kubelet[1523]: E0318 12:43:55.360047    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:57 multinode-642600 kubelet[1523]: E0318 12:43:57.359796    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:57 multinode-642600 kubelet[1523]: E0318 12:43:57.359970    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.360327    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191918    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.360455    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191918    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.483297    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:47.192010    5712 command_runner.go:130] > Mar 18 12:44:01 multinode-642600 kubelet[1523]: E0318 12:44:01.359691    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.192010    5712 command_runner.go:130] > Mar 18 12:44:01 multinode-642600 kubelet[1523]: E0318 12:44:01.360228    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032626    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032722    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.0327033 +0000 UTC m=+71.043942936 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134727    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134857    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.135073    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.13505028 +0000 UTC m=+71.146289916 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360260    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360354    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.124509    1523 scope.go:117] "RemoveContainer" containerID="996fb0f2ade69129acd747fc5146ef4295cc7ebd79cae8e8f881a21393ddb74a"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.125880    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: E0318 12:44:04.127355    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d2718b8a-26a9-4c86-bf9a-221d1ee23ceb)\"" pod="kube-system/storage-provisioner" podUID="d2718b8a-26a9-4c86-bf9a-221d1ee23ceb"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 kubelet[1523]: I0318 12:44:17.359956    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.325657    1523 scope.go:117] "RemoveContainer" containerID="301c80f8b38cb79f051755af6af0fb604c0eee0689fd1f2d22a66e0969a9583f"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.374630    1523 scope.go:117] "RemoveContainer" containerID="4b94d396876e5c7e3b8c69b01560d10ad95ff183ab3cc78a194276537cfd6cf5"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: E0318 12:44:24.399375    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 12:44:47.192659    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 kubelet[1523]: I0318 12:44:35.962288    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1b2432b0ed66a1175586c13232eb9b9239f18a4f9a86e2a0c5f48c1407fdb14"
	I0318 12:44:47.192659    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 kubelet[1523]: I0318 12:44:36.079817    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1090dd57409807a15613607fd810b67863a9dd9c5a8512d7a6720906641c7f26"
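
Editor's note: the kubelet entries above show the volume manager's retry behavior directly. Each failed MountVolume.SetUp is retried with a doubling delay (durationBeforeRetry 500ms, 1s, 2s, 4s, 8s, 16s, 32s) until the referenced objects ("kube-system"/"coredns", "default"/"kube-root-ca.crt") are registered with the restarted apiserver. A minimal Go sketch of that doubling-with-cap backoff pattern follows; it is illustrative only, and names such as maxBackoff and maxAttempts are ours, not kubelet's.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the delay after each failure
// up to a cap. This mirrors the durationBeforeRetry progression
// (500ms, 1s, 2s, 4s, ...) visible in the kubelet log above; the
// actual kubelet logic lives in nestedpendingoperations and differs
// in detail.
func retryWithBackoff(op func() error) error {
	const (
		initialBackoff = 500 * time.Millisecond
		maxBackoff     = 32 * time.Second // hypothetical cap for the sketch
		maxAttempts    = 8
	)
	delay := initialBackoff
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxBackoff {
			delay = maxBackoff
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}

func main() {
	// Simulate a mount that keeps failing, as in the log above.
	_ = retryWithBackoff(func() error {
		return errors.New(`object "kube-system"/"coredns" not registered`)
	})
}
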
	I0318 12:44:47.239763    5712 logs.go:123] Gathering logs for dmesg ...
	I0318 12:44:47.239763    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 12:44:47.271820    5712 command_runner.go:130] > [Mar18 12:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.129398] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.023142] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.067111] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.023049] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0318 12:44:47.271874    5712 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +5.633479] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.746575] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +1.948336] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +7.356358] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0318 12:44:47.271874    5712 command_runner.go:130] > [Mar18 12:42] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.196447] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [Mar18 12:43] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.116812] kauditd_printk_skb: 73 callbacks suppressed
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.565179] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.224131] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.243543] systemd-fstab-generator[1034]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +2.986318] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.197212] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.228503] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0318 12:44:47.272424    5712 command_runner.go:130] > [  +0.297734] systemd-fstab-generator[1258]: Ignoring "noauto" option for root device
	I0318 12:44:47.272424    5712 command_runner.go:130] > [  +0.969011] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	I0318 12:44:47.272424    5712 command_runner.go:130] > [  +0.114690] kauditd_printk_skb: 205 callbacks suppressed
	I0318 12:44:47.272424    5712 command_runner.go:130] > [  +3.575437] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	I0318 12:44:47.272424    5712 command_runner.go:130] > [  +1.537938] kauditd_printk_skb: 44 callbacks suppressed
	I0318 12:44:47.272492    5712 command_runner.go:130] > [  +6.654182] kauditd_printk_skb: 30 callbacks suppressed
	I0318 12:44:47.272492    5712 command_runner.go:130] > [  +4.384606] systemd-fstab-generator[2563]: Ignoring "noauto" option for root device
	I0318 12:44:47.272492    5712 command_runner.go:130] > [  +7.200668] kauditd_printk_skb: 70 callbacks suppressed
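
Editor's note: each "Gathering logs for ..." step in this section follows the same pattern: run a shell command on the guest (sudo dmesg ... | tail -n 400 above, docker logs --tail 400 <id> below) and echo every output line back with a "> " prefix, which is what produces the command_runner.go:130] > lines throughout this report. A rough local sketch of that run-and-prefix pattern is below; it uses os/exec for simplicity, whereas minikube's ssh_runner executes the command over SSH, so treat the details as assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runAndPrefix runs a shell command and echoes every line of its
// combined output with a "> " prefix, roughly the shape of the
// command_runner output in this report (the real runner goes over SSH
// to the minikube guest).
func runAndPrefix(command string) error {
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	for _, line := range strings.Split(strings.TrimRight(string(out), "\n"), "\n") {
		fmt.Println(">", line)
	}
	return err
}

func main() {
	// Same shape as the dmesg gathering step above (flags and
	// privileges may differ on other systems).
	if err := runAndPrefix("dmesg --level warn,err | tail -n 400"); err != nil {
		fmt.Println("command failed:", err)
	}
}
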
	I0318 12:44:47.274746    5712 logs.go:123] Gathering logs for kube-controller-manager [14ae9398d33b] ...
	I0318 12:44:47.274818    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ae9398d33b"
	I0318 12:44:47.313572    5712 command_runner.go:130] ! I0318 12:43:27.406049       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:47.313572    5712 command_runner.go:130] ! I0318 12:43:29.733819       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:44:47.313572    5712 command_runner.go:130] ! I0318 12:43:29.734137       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.313658    5712 command_runner.go:130] ! I0318 12:43:29.737351       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:47.313658    5712 command_runner.go:130] ! I0318 12:43:29.737598       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:47.313751    5712 command_runner.go:130] ! I0318 12:43:29.739365       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:44:47.313751    5712 command_runner.go:130] ! I0318 12:43:29.740428       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:47.313751    5712 command_runner.go:130] ! I0318 12:43:32.581261       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 12:44:47.313751    5712 command_runner.go:130] ! I0318 12:43:32.597867       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 12:44:47.313816    5712 command_runner.go:130] ! I0318 12:43:32.602078       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 12:44:47.313816    5712 command_runner.go:130] ! I0318 12:43:32.602099       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 12:44:47.313816    5712 command_runner.go:130] ! I0318 12:43:32.605600       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 12:44:47.313816    5712 command_runner.go:130] ! I0318 12:43:32.605807       1 expand_controller.go:328] "Starting expand controller"
	I0318 12:44:47.313888    5712 command_runner.go:130] ! I0318 12:43:32.605957       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 12:44:47.313888    5712 command_runner.go:130] ! I0318 12:43:32.620725       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 12:44:47.313888    5712 command_runner.go:130] ! I0318 12:43:32.621286       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 12:44:47.313888    5712 command_runner.go:130] ! I0318 12:43:32.621374       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 12:44:47.313888    5712 command_runner.go:130] ! I0318 12:43:32.663010       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 12:44:47.313946    5712 command_runner.go:130] ! I0318 12:43:32.663383       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 12:44:47.313946    5712 command_runner.go:130] ! I0318 12:43:32.663451       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 12:44:47.313946    5712 command_runner.go:130] ! I0318 12:43:32.674431       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 12:44:47.313946    5712 command_runner.go:130] ! I0318 12:43:32.675030       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 12:44:47.313946    5712 command_runner.go:130] ! I0318 12:43:32.675045       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 12:44:47.314010    5712 command_runner.go:130] ! I0318 12:43:32.680220       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 12:44:47.314010    5712 command_runner.go:130] ! I0318 12:43:32.680236       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 12:44:47.314010    5712 command_runner.go:130] ! I0318 12:43:32.680266       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:47.314079    5712 command_runner.go:130] ! I0318 12:43:32.681919       1 shared_informer.go:318] Caches are synced for tokens
	I0318 12:44:47.314079    5712 command_runner.go:130] ! I0318 12:43:32.684132       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 12:44:47.314079    5712 command_runner.go:130] ! I0318 12:43:32.684147       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 12:44:47.314460    5712 command_runner.go:130] ! I0318 12:43:32.684164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:47.314460    5712 command_runner.go:130] ! I0318 12:43:32.685811       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 12:44:47.314460    5712 command_runner.go:130] ! I0318 12:43:32.685845       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:47.314523    5712 command_runner.go:130] ! I0318 12:43:32.686123       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:47.314523    5712 command_runner.go:130] ! I0318 12:43:32.687526       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 12:44:47.314588    5712 command_runner.go:130] ! I0318 12:43:32.687845       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.687858       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.687918       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.691958       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.692673       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.696192       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.696622       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.701031       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.701415       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.701449       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.701458       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 12:44:47.314843    5712 command_runner.go:130] ! E0318 12:43:32.705162       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 12:44:47.314843    5712 command_runner.go:130] ! I0318 12:43:32.705349       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 12:44:47.314907    5712 command_runner.go:130] ! I0318 12:43:32.705364       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 12:44:47.314907    5712 command_runner.go:130] ! I0318 12:43:32.705376       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 12:44:47.314907    5712 command_runner.go:130] ! I0318 12:43:32.750736       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 12:44:47.314965    5712 command_runner.go:130] ! I0318 12:43:32.751361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 12:44:47.314965    5712 command_runner.go:130] ! W0318 12:43:32.751515       1 shared_informer.go:593] resyncPeriod 19h34m1.540802039s is smaller than resyncCheckPeriod 20h12m46.622656472s and the informer has already started. Changing it to 20h12m46.622656472s
	I0318 12:44:47.315004    5712 command_runner.go:130] ! I0318 12:43:32.752012       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 12:44:47.315050    5712 command_runner.go:130] ! I0318 12:43:32.752529       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 12:44:47.315050    5712 command_runner.go:130] ! I0318 12:43:32.752719       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 12:44:47.315050    5712 command_runner.go:130] ! I0318 12:43:32.752884       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 12:44:47.315116    5712 command_runner.go:130] ! I0318 12:43:32.753191       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 12:44:47.315116    5712 command_runner.go:130] ! I0318 12:43:32.753284       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 12:44:47.315116    5712 command_runner.go:130] ! I0318 12:43:32.753677       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 12:44:47.315116    5712 command_runner.go:130] ! I0318 12:43:32.753791       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 12:44:47.315193    5712 command_runner.go:130] ! I0318 12:43:32.753884       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 12:44:47.315193    5712 command_runner.go:130] ! I0318 12:43:32.754036       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 12:44:47.315193    5712 command_runner.go:130] ! I0318 12:43:32.754202       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 12:44:47.315193    5712 command_runner.go:130] ! I0318 12:43:32.754691       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 12:44:47.315258    5712 command_runner.go:130] ! I0318 12:43:32.755001       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 12:44:47.315258    5712 command_runner.go:130] ! I0318 12:43:32.755205       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 12:44:47.315340    5712 command_runner.go:130] ! I0318 12:43:32.755784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 12:44:47.315340    5712 command_runner.go:130] ! I0318 12:43:32.755974       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 12:44:47.315399    5712 command_runner.go:130] ! I0318 12:43:32.756144       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 12:44:47.315399    5712 command_runner.go:130] ! I0318 12:43:32.756649       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 12:44:47.315399    5712 command_runner.go:130] ! I0318 12:43:32.756826       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 12:44:47.315399    5712 command_runner.go:130] ! I0318 12:43:32.757119       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:47.315465    5712 command_runner.go:130] ! I0318 12:43:32.757364       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 12:44:47.315465    5712 command_runner.go:130] ! I0318 12:43:32.757580       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 12:44:47.315524    5712 command_runner.go:130] ! E0318 12:43:32.773718       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 12:44:47.315524    5712 command_runner.go:130] ! I0318 12:43:32.773746       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 12:44:47.315524    5712 command_runner.go:130] ! I0318 12:43:32.786590       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 12:44:47.315588    5712 command_runner.go:130] ! I0318 12:43:32.786978       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 12:44:47.315588    5712 command_runner.go:130] ! I0318 12:43:32.787007       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 12:44:47.315588    5712 command_runner.go:130] ! I0318 12:43:32.795770       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 12:44:47.315588    5712 command_runner.go:130] ! I0318 12:43:32.798452       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 12:44:47.315588    5712 command_runner.go:130] ! I0318 12:43:32.798585       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 12:44:47.315653    5712 command_runner.go:130] ! I0318 12:43:32.801712       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 12:44:47.315653    5712 command_runner.go:130] ! I0318 12:43:32.802261       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 12:44:47.315653    5712 command_runner.go:130] ! I0318 12:43:32.806063       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 12:44:47.315719    5712 command_runner.go:130] ! I0318 12:43:32.823560       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 12:44:47.315719    5712 command_runner.go:130] ! I0318 12:43:32.823578       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 12:44:47.315719    5712 command_runner.go:130] ! I0318 12:43:32.823595       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:47.315719    5712 command_runner.go:130] ! I0318 12:43:32.823621       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 12:44:47.315785    5712 command_runner.go:130] ! I0318 12:43:32.833033       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 12:44:47.315785    5712 command_runner.go:130] ! I0318 12:43:32.833480       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 12:44:47.315785    5712 command_runner.go:130] ! I0318 12:43:32.833494       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 12:44:47.315871    5712 command_runner.go:130] ! I0318 12:43:32.862160       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 12:44:47.315871    5712 command_runner.go:130] ! I0318 12:43:32.862209       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 12:44:47.315871    5712 command_runner.go:130] ! I0318 12:43:32.862524       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 12:44:47.315871    5712 command_runner.go:130] ! I0318 12:43:32.862562       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 12:44:47.315871    5712 command_runner.go:130] ! I0318 12:43:32.862573       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 12:44:47.315925    5712 command_runner.go:130] ! I0318 12:43:32.883369       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 12:44:47.315925    5712 command_runner.go:130] ! I0318 12:43:32.886141       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 12:44:47.315977    5712 command_runner.go:130] ! I0318 12:43:32.886674       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 12:44:47.315977    5712 command_runner.go:130] ! I0318 12:43:32.896468       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 12:44:47.315977    5712 command_runner.go:130] ! I0318 12:43:32.896951       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 12:44:47.316049    5712 command_runner.go:130] ! I0318 12:43:32.897135       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 12:44:47.316049    5712 command_runner.go:130] ! I0318 12:43:32.900325       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 12:44:47.316049    5712 command_runner.go:130] ! I0318 12:43:32.900580       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 12:44:47.316104    5712 command_runner.go:130] ! I0318 12:43:32.903531       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 12:44:47.316104    5712 command_runner.go:130] ! I0318 12:43:32.917793       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 12:44:47.316104    5712 command_runner.go:130] ! I0318 12:43:32.918152       1 horizontal.go:200] "Starting HPA controller"
	I0318 12:44:47.316104    5712 command_runner.go:130] ! I0318 12:43:32.918638       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 12:44:47.316171    5712 command_runner.go:130] ! I0318 12:43:32.920489       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 12:44:47.316171    5712 command_runner.go:130] ! I0318 12:43:32.920802       1 gc_controller.go:101] "Starting GC controller"
	I0318 12:44:47.316171    5712 command_runner.go:130] ! I0318 12:43:32.922940       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 12:44:47.316171    5712 command_runner.go:130] ! I0318 12:43:32.923834       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 12:44:47.316235    5712 command_runner.go:130] ! I0318 12:43:32.924143       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 12:44:47.316235    5712 command_runner.go:130] ! I0318 12:43:32.924461       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 12:44:47.316235    5712 command_runner.go:130] ! I0318 12:43:32.935394       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 12:44:47.316235    5712 command_runner.go:130] ! I0318 12:43:32.935610       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 12:44:47.316303    5712 command_runner.go:130] ! I0318 12:43:32.935623       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 12:44:47.316303    5712 command_runner.go:130] ! I0318 12:43:32.996434       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 12:44:47.316303    5712 command_runner.go:130] ! I0318 12:43:32.996586       1 job_controller.go:226] "Starting job controller"
	I0318 12:44:47.316303    5712 command_runner.go:130] ! I0318 12:43:32.996666       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 12:44:47.316370    5712 command_runner.go:130] ! I0318 12:43:33.085354       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 12:44:47.316370    5712 command_runner.go:130] ! I0318 12:43:33.086157       1 disruption.go:433] "Sending events to api server."
	I0318 12:44:47.316370    5712 command_runner.go:130] ! I0318 12:43:33.086235       1 disruption.go:444] "Starting disruption controller"
	I0318 12:44:47.316370    5712 command_runner.go:130] ! I0318 12:43:33.086245       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 12:44:47.316370    5712 command_runner.go:130] ! I0318 12:43:33.141477       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 12:44:47.316370    5712 command_runner.go:130] ! I0318 12:43:33.142359       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 12:44:47.316467    5712 command_runner.go:130] ! I0318 12:43:33.142566       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 12:44:47.316467    5712 command_runner.go:130] ! I0318 12:43:33.186973       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 12:44:47.316467    5712 command_runner.go:130] ! I0318 12:43:33.187335       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 12:44:47.316467    5712 command_runner.go:130] ! I0318 12:43:33.187410       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 12:44:47.316534    5712 command_runner.go:130] ! I0318 12:43:33.236517       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 12:44:47.316575    5712 command_runner.go:130] ! I0318 12:43:33.236982       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:33.237471       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:33.286539       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:33.287154       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:33.287375       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.355688       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.355845       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.356879       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.357033       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.359716       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.361043       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.361062       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.364706       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.364861       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.364989       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.369492       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.369675       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.369706       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.375944       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.376145       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.377600       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.390058       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.405940       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600\" does not exist"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.408115       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.408433       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.408623       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m02\" does not exist"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.408708       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.408817       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.421506       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.446678       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.459596       1 shared_informer.go:318] Caches are synced for node
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.459833       1 range_allocator.go:174] "Sending events to api server"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.460258       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.460829       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.461091       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.461418       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.463618       1 shared_informer.go:318] Caches are synced for namespace
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.466097       1 shared_informer.go:318] Caches are synced for taint
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.466427       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.466639       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.466863       1 taint_manager.go:210] "Sending events to api server"
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.468821       1 event.go:307] "Event occurred" object="multinode-642600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600 event: Registered Node multinode-642600 in Controller"
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.469328       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller"
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.469579       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.469959       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.477268       1 shared_informer.go:318] Caches are synced for deployment
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.486297       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.487082       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.487171       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.487768       1 shared_informer.go:318] Caches are synced for TTL
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.487848       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.489265       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.497682       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.498610       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.498725       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.501123       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600"
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.503362       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.505991       1 shared_informer.go:318] Caches are synced for expand
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.503938       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.506104       1 shared_informer.go:318] Caches are synced for service account
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.505782       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m02"
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.505818       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m03"
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.506356       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.521010       1 shared_informer.go:318] Caches are synced for HPA
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.524230       1 shared_informer.go:318] Caches are synced for GC
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.527081       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.534422       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.537721       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.545260       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.546769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.454588ms"
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.547853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.476888ms"
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.552128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66µs"
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.552429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.199µs"
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.565701       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.580927       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.585098       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.586663       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:43.590461       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:43.597830       1 shared_informer.go:318] Caches are synced for job
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:43.635734       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:43.658493       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:43.686534       1 shared_informer.go:318] Caches are synced for disruption
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:44.024395       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:44.024760       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:44.048280       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:44:11.303411       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:47.318381    5712 command_runner.go:130] ! I0318 12:44:13.533509       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-48qkw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-48qkw"
	I0318 12:44:47.318381    5712 command_runner.go:130] ! I0318 12:44:13.534203       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-fgn7v" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-fgn7v"
	I0318 12:44:47.318381    5712 command_runner.go:130] ! I0318 12:44:13.534478       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0318 12:44:47.318381    5712 command_runner.go:130] ! I0318 12:44:23.562573       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m02 status is now: NodeNotReady"
	I0318 12:44:47.318486    5712 command_runner.go:130] ! I0318 12:44:23.591486       1 event.go:307] "Event occurred" object="kube-system/kindnet-d5llj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:47.318486    5712 command_runner.go:130] ! I0318 12:44:23.614671       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vts9f" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:47.318486    5712 command_runner.go:130] ! I0318 12:44:23.639496       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-hmhdf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:47.318486    5712 command_runner.go:130] ! I0318 12:44:23.661949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.740356ms"
	I0318 12:44:47.318486    5712 command_runner.go:130] ! I0318 12:44:23.663289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="50.499µs"
	I0318 12:44:47.318486    5712 command_runner.go:130] ! I0318 12:44:37.149797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.1µs"
	I0318 12:44:47.318609    5712 command_runner.go:130] ! I0318 12:44:37.209300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="28.125704ms"
	I0318 12:44:47.318609    5712 command_runner.go:130] ! I0318 12:44:37.209415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.4µs"
	I0318 12:44:47.318609    5712 command_runner.go:130] ! I0318 12:44:37.245284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.227968ms"
	I0318 12:44:47.318609    5712 command_runner.go:130] ! I0318 12:44:37.254358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="3.872028ms"
	I0318 12:44:47.333870    5712 logs.go:123] Gathering logs for container status ...
	I0318 12:44:47.333870    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 12:44:47.439689    5712 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0318 12:44:47.439806    5712 command_runner.go:130] > 566e40ce923f7       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   e1b2432b0ed66       busybox-5b5d89c9d6-48qkw
	I0318 12:44:47.439806    5712 command_runner.go:130] > fcf17db92b351       ead0a4a53df89                                                                                         12 seconds ago       Running             coredns                   1                   1090dd5740980       coredns-5dd5756b68-fgn7v
	I0318 12:44:47.439806    5712 command_runner.go:130] > 4652c26c0904e       6e38f40d628db                                                                                         30 seconds ago       Running             storage-provisioner       2                   889c16eb0ab73       storage-provisioner
	I0318 12:44:47.439806    5712 command_runner.go:130] > 9fec05a61d2a9       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   5ecbdcbdad3fa       kindnet-kpt4f
	I0318 12:44:47.439956    5712 command_runner.go:130] > 787ade2ea2cd0       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   889c16eb0ab73       storage-provisioner
	I0318 12:44:47.439956    5712 command_runner.go:130] > 575b41a3a85a4       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   7a2f0ccaf5c4c       kube-proxy-4dg79
	I0318 12:44:47.439956    5712 command_runner.go:130] > a48a6d310b868       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   a7281d6e698ea       kube-apiserver-multinode-642600
	I0318 12:44:47.439956    5712 command_runner.go:130] > 14ae9398d33b1       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   eca6768355c74       kube-controller-manager-multinode-642600
	I0318 12:44:47.440096    5712 command_runner.go:130] > bd1e4f4d262e3       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   f62197122538f       kube-scheduler-multinode-642600
	I0318 12:44:47.440096    5712 command_runner.go:130] > 8e7911b58c587       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   67004ee038ee4       etcd-multinode-642600
	I0318 12:44:47.440096    5712 command_runner.go:130] > a8dd2eacb7251       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago       Exited              busybox                   0                   29bb4d534c2e2       busybox-5b5d89c9d6-48qkw
	I0318 12:44:47.440096    5712 command_runner.go:130] > e81f1d2fdb360       ead0a4a53df89                                                                                         25 minutes ago       Exited              coredns                   0                   ed38da653fbef       coredns-5dd5756b68-fgn7v
	I0318 12:44:47.440096    5712 command_runner.go:130] > 5cf42651cb21d       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago       Exited              kindnet-cni               0                   fef37141be6db       kindnet-kpt4f
	I0318 12:44:47.440200    5712 command_runner.go:130] > 4bbad08fe59ac       83f6cc407eed8                                                                                         25 minutes ago       Exited              kube-proxy                0                   2f4709a3a45a4       kube-proxy-4dg79
	I0318 12:44:47.440200    5712 command_runner.go:130] > a54be44369019       d058aa5ab969c                                                                                         26 minutes ago       Exited              kube-controller-manager   0                   d766c4514f0bf       kube-controller-manager-multinode-642600
	I0318 12:44:47.440200    5712 command_runner.go:130] > 47777d4c0b90d       e3db313c6dbc0                                                                                         26 minutes ago       Exited              kube-scheduler            0                   3500a9f1ca84e       kube-scheduler-multinode-642600
	I0318 12:44:47.442785    5712 logs.go:123] Gathering logs for describe nodes ...
	I0318 12:44:47.442874    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 12:44:47.701711    5712 command_runner.go:130] > Name:               multinode-642600
	I0318 12:44:47.701711    5712 command_runner.go:130] > Roles:              control-plane
	I0318 12:44:47.701711    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_18_52_0700
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0318 12:44:47.701711    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:47.701711    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:18:46 +0000
	I0318 12:44:47.701711    5712 command_runner.go:130] > Taints:             <none>
	I0318 12:44:47.701711    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:47.701711    5712 command_runner.go:130] > Lease:
	I0318 12:44:47.701711    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600
	I0318 12:44:47.701711    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:47.701711    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:44:41 +0000
	I0318 12:44:47.701711    5712 command_runner.go:130] > Conditions:
	I0318 12:44:47.701711    5712 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0318 12:44:47.702700    5712 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0318 12:44:47.702700    5712 command_runner.go:130] >   MemoryPressure   False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0318 12:44:47.702700    5712 command_runner.go:130] >   DiskPressure     False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0318 12:44:47.702700    5712 command_runner.go:130] >   PIDPressure      False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Ready            True    Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:44:11 +0000   KubeletReady                 kubelet is posting ready status
	I0318 12:44:47.702700    5712 command_runner.go:130] > Addresses:
	I0318 12:44:47.702700    5712 command_runner.go:130] >   InternalIP:  172.25.148.129
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Hostname:    multinode-642600
	I0318 12:44:47.702700    5712 command_runner.go:130] > Capacity:
	I0318 12:44:47.702700    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:47.702700    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:47.702700    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:47.702700    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:47.702700    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:47.702700    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:47.702700    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:47.702700    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:47.702700    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:47.702700    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:47.702700    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:47.702700    5712 command_runner.go:130] > System Info:
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Machine ID:                 021cb44913fc4689ab25739f723ae3da
	I0318 12:44:47.702700    5712 command_runner.go:130] >   System UUID:                8a1bcbab-f132-7f42-b33a-a7db97e0afe6
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Boot ID:                    f11360a5-920e-4374-9d22-d06f111079d8
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:47.702700    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:47.702700    5712 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0318 12:44:47.702700    5712 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0318 12:44:47.702700    5712 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:47.702700    5712 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0318 12:44:47.702700    5712 command_runner.go:130] >   default                     busybox-5b5d89c9d6-48qkw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-fgn7v                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 etcd-multinode-642600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 kindnet-kpt4f                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-642600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-642600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 kube-proxy-4dg79                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-642600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:47.702700    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:47.702700    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Resource           Requests     Limits
	I0318 12:44:47.702700    5712 command_runner.go:130] >   --------           --------     ------
	I0318 12:44:47.702700    5712 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0318 12:44:47.702700    5712 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0318 12:44:47.702700    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0318 12:44:47.702700    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0318 12:44:47.702700    5712 command_runner.go:130] > Events:
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 12:44:47.702700    5712 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  Starting                 25m                kube-proxy       
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  Starting                 25m                kubelet          Starting kubelet.
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     25m                kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    25m                kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:47.703717    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:47.703717    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  25m                kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:47.703717    5712 command_runner.go:130] >   Normal  RegisteredNode           25m                node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  NodeReady                25m                kubelet          Node multinode-642600 status is now: NodeReady
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  Starting                 83s                kubelet          Starting kubelet.
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	I0318 12:44:47.703785    5712 command_runner.go:130] > Name:               multinode-642600-m02
	I0318 12:44:47.703785    5712 command_runner.go:130] > Roles:              <none>
	I0318 12:44:47.703785    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600-m02
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_22_13_0700
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:47.703785    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:47.704355    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:47.704355    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:47.704355    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:22:12 +0000
	I0318 12:44:47.704355    5712 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 12:44:47.704355    5712 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 12:44:47.704355    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:47.704355    5712 command_runner.go:130] > Lease:
	I0318 12:44:47.704355    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600-m02
	I0318 12:44:47.704355    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:47.704355    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:40:15 +0000
	I0318 12:44:47.704355    5712 command_runner.go:130] > Conditions:
	I0318 12:44:47.704355    5712 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 12:44:47.704355    5712 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 12:44:47.704355    5712 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.704355    5712 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.704575    5712 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.704575    5712 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.704575    5712 command_runner.go:130] > Addresses:
	I0318 12:44:47.704575    5712 command_runner.go:130] >   InternalIP:  172.25.159.102
	I0318 12:44:47.704575    5712 command_runner.go:130] >   Hostname:    multinode-642600-m02
	I0318 12:44:47.704575    5712 command_runner.go:130] > Capacity:
	I0318 12:44:47.704575    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:47.704575    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:47.704687    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:47.704687    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:47.704687    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:47.704687    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:47.704687    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:47.704687    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:47.704687    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:47.704763    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:47.704763    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:47.704763    5712 command_runner.go:130] > System Info:
	I0318 12:44:47.704763    5712 command_runner.go:130] >   Machine ID:                 3840c114554e41ff9ded1410244d8aba
	I0318 12:44:47.704763    5712 command_runner.go:130] >   System UUID:                23dbf5b1-f940-4749-8caf-1ae12d869a30
	I0318 12:44:47.704763    5712 command_runner.go:130] >   Boot ID:                    9a3fcab5-beb6-4505-b112-82809850bba3
	I0318 12:44:47.704763    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:47.704763    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:47.704763    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:47.704881    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:47.704881    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:47.704881    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:47.704881    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:47.704881    5712 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0318 12:44:47.704936    5712 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0318 12:44:47.704936    5712 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0318 12:44:47.704936    5712 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:47.704936    5712 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0318 12:44:47.704999    5712 command_runner.go:130] >   default                     busybox-5b5d89c9d6-hmhdf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0318 12:44:47.704999    5712 command_runner.go:130] >   kube-system                 kindnet-d5llj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0318 12:44:47.704999    5712 command_runner.go:130] >   kube-system                 kube-proxy-vts9f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0318 12:44:47.705053    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:47.705053    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:47.705053    5712 command_runner.go:130] >   Resource           Requests   Limits
	I0318 12:44:47.705115    5712 command_runner.go:130] >   --------           --------   ------
	I0318 12:44:47.705115    5712 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 12:44:47.705115    5712 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 12:44:47.705115    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 12:44:47.705115    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 12:44:47.705115    5712 command_runner.go:130] > Events:
	I0318 12:44:47.705115    5712 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 12:44:47.705169    5712 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 12:44:47.705169    5712 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0318 12:44:47.705169    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientMemory
	I0318 12:44:47.705169    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasNoDiskPressure
	I0318 12:44:47.705228    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientPID
	I0318 12:44:47.705228    5712 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	I0318 12:44:47.705267    5712 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-642600-m02 status is now: NodeReady
	I0318 12:44:47.705267    5712 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	I0318 12:44:47.705267    5712 command_runner.go:130] >   Normal  NodeNotReady             24s                node-controller  Node multinode-642600-m02 status is now: NodeNotReady
	I0318 12:44:47.705313    5712 command_runner.go:130] > Name:               multinode-642600-m03
	I0318 12:44:47.705313    5712 command_runner.go:130] > Roles:              <none>
	I0318 12:44:47.705313    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:47.705313    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:47.705313    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600-m03
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_38_47_0700
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:47.705369    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:47.705492    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:47.705492    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:38:46 +0000
	I0318 12:44:47.705492    5712 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 12:44:47.705544    5712 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 12:44:47.705544    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:47.705544    5712 command_runner.go:130] > Lease:
	I0318 12:44:47.705544    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600-m03
	I0318 12:44:47.705544    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:47.705544    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:39:48 +0000
	I0318 12:44:47.705544    5712 command_runner.go:130] > Conditions:
	I0318 12:44:47.705602    5712 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 12:44:47.705602    5712 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 12:44:47.705602    5712 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.705602    5712 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.705728    5712 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.705830    5712 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.705853    5712 command_runner.go:130] > Addresses:
	I0318 12:44:47.705853    5712 command_runner.go:130] >   InternalIP:  172.25.157.200
	I0318 12:44:47.705887    5712 command_runner.go:130] >   Hostname:    multinode-642600-m03
	I0318 12:44:47.705887    5712 command_runner.go:130] > Capacity:
	I0318 12:44:47.705887    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:47.705887    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:47.705887    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:47.705887    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:47.705887    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:47.705887    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:47.705887    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:47.705955    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:47.705955    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:47.705955    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:47.705955    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:47.706011    5712 command_runner.go:130] > System Info:
	I0318 12:44:47.706011    5712 command_runner.go:130] >   Machine ID:                 b858c7f1c1bc42a69e1927ccc26ea5ce
	I0318 12:44:47.706011    5712 command_runner.go:130] >   System UUID:                8c4fd36f-ab8b-5447-9df2-542afafc5ab4
	I0318 12:44:47.706011    5712 command_runner.go:130] >   Boot ID:                    cea0ecfe-24ab-4614-a808-1e2a7a960f26
	I0318 12:44:47.706074    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:47.706074    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:47.706074    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:47.706074    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:47.706074    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:47.706074    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:47.706074    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:47.706128    5712 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0318 12:44:47.706128    5712 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0318 12:44:47.706128    5712 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:47.706128    5712 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0318 12:44:47.706128    5712 command_runner.go:130] >   kube-system                 kindnet-thkjp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0318 12:44:47.706128    5712 command_runner.go:130] >   kube-system                 kube-proxy-khbjt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0318 12:44:47.706128    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:47.706128    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Resource           Requests   Limits
	I0318 12:44:47.706128    5712 command_runner.go:130] >   --------           --------   ------
	I0318 12:44:47.706128    5712 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 12:44:47.706128    5712 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 12:44:47.706128    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 12:44:47.706128    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 12:44:47.706128    5712 command_runner.go:130] > Events:
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Type    Reason                   Age                  From             Message
	I0318 12:44:47.706128    5712 command_runner.go:130] >   ----    ------                   ----                 ----             -------
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  Starting                 17m                  kube-proxy       
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  Starting                 5m58s                kube-proxy       
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x5 over 17m)    kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x5 over 17m)    kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x5 over 17m)    kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeReady                17m                  kubelet          Node multinode-642600-m03 status is now: NodeReady
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  Starting                 6m1s                 kubelet          Starting kubelet.
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  6m1s (x2 over 6m1s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    6m1s (x2 over 6m1s)  kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     6m1s (x2 over 6m1s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  6m1s                 kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  RegisteredNode           6m                   node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeReady                5m55s                kubelet          Node multinode-642600-m03 status is now: NodeReady
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeNotReady             4m14s                node-controller  Node multinode-642600-m03 status is now: NodeNotReady
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  RegisteredNode           64s                  node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
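
The node dump above is the heart of the failure: multinode-642600 itself is healthy again after its restart, but m02 and m03 both report every condition as Unknown with reason NodeStatusUnknown ("Kubelet stopped posting node status.") and carry the node.kubernetes.io/unreachable NoSchedule/NoExecute taints that the node controller applies once heartbeats stop, matching the NodeNotReady events at the end of each record. The percentages in the Allocated resources tables are relative to the Allocatable figures: 100m of 2 CPUs is 5%, and 50Mi of 2164268Ki is roughly 2%. Below is a minimal client-go sketch, outside minikube's own code, of how such a readiness check could be done programmatically; the kubeconfig path is an assumption for illustration.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption; adjust for your environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
                    // Status "Unknown" corresponds to the "Kubelet stopped
                    // posting node status." rows in the dump above.
                    fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
                }
            }
        }
    }

Run against this cluster, the sketch would print m02 and m03 with Ready=Unknown and reason=NodeStatusUnknown, the same state `kubectl describe nodes` shows above.
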
	I0318 12:44:47.718256    5712 logs.go:123] Gathering logs for kube-scheduler [47777d4c0b90] ...
	I0318 12:44:47.718256    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47777d4c0b90"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:43.828879       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.562226       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.562618       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.562705       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.562793       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:46.615857       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:46.615957       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:46.622177       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:46.622201       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:46.625084       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:46.625162       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.631110       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! E0318 12:18:46.631164       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.634891       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:47.752951    5712 command_runner.go:130] ! E0318 12:18:46.634917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.636313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:47.752951    5712 command_runner.go:130] ! E0318 12:18:46.638655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.636730       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.639099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.636905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.639254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.636986       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.639495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.641683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.641953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.642236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.642375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.642673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.646073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.647270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.646147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.647534       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.646208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.647719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.646271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.647738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.646322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.647752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.647915       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.650301       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.650528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:47.471960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:47.472093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:47.540921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:47.541368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:47.545171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:47.546126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:47.563772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:47.563806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:47.597770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:47.597873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:47.684794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:47.754972    5712 command_runner.go:130] ! E0318 12:18:47.685008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:47.754972    5712 command_runner.go:130] ! W0318 12:18:47.685352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:47.685509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:47.840132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:47.840303       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:47.879838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:47.880363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:47.906171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:47.906493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:48.059997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:48.060049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:48.096160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:48.096304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:48.096504       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:48.096662       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:48.133175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:48.133469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:48.135066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:47.755599    5712 command_runner.go:130] ! E0318 12:18:48.135196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:47.755599    5712 command_runner.go:130] ! I0318 12:18:50.022459       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:47.755599    5712 command_runner.go:130] ! E0318 12:40:51.995231       1 run.go:74] "command failed" err="finished without leader elect"
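
The tail of this scheduler log is the only line that reflects the restart itself: "finished without leader elect" at 12:40:51, i.e. the old scheduler giving up its lease when the node was stopped. The forbidden list/watch warnings before it are the usual startup race (the scheduler comes up before the API server has finished its RBAC bootstrap) and stop once "Caches are synced" at 12:18:50. If such errors persisted, the warning at the top of the log suggests the fix itself: bind a service account to the extension-apiserver-authentication-reader role. A hedged client-go sketch of that remediation follows; the binding and service-account names are hypothetical, and nothing actually needed fixing in this run.

    package main

    import (
        "context"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        rb := &rbacv1.RoleBinding{
            // Binding name is hypothetical, for illustration only.
            ObjectMeta: metav1.ObjectMeta{Name: "example-auth-reader", Namespace: "kube-system"},
            RoleRef: rbacv1.RoleRef{
                APIGroup: "rbac.authorization.k8s.io",
                Kind:     "Role",
                Name:     "extension-apiserver-authentication-reader",
            },
            Subjects: []rbacv1.Subject{{
                Kind:      "ServiceAccount",
                Name:      "example-sa", // hypothetical service account
                Namespace: "kube-system",
            }},
        }
        if _, err := cs.RbacV1().RoleBindings("kube-system").Create(context.TODO(), rb, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
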
	I0318 12:44:47.768181    5712 logs.go:123] Gathering logs for kindnet [9fec05a61d2a] ...
	I0318 12:44:47.768181    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fec05a61d2a"
	I0318 12:44:47.800186    5712 command_runner.go:130] ! I0318 12:43:33.429181       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 12:44:47.800323    5712 command_runner.go:130] ! I0318 12:43:33.431032       1 main.go:107] hostIP = 172.25.148.129
	I0318 12:44:47.800379    5712 command_runner.go:130] ! podIP = 172.25.148.129
	I0318 12:44:47.800379    5712 command_runner.go:130] ! I0318 12:43:33.432708       1 main.go:116] setting mtu 1500 for CNI 
	I0318 12:44:47.800379    5712 command_runner.go:130] ! I0318 12:43:33.432750       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 12:44:47.800379    5712 command_runner.go:130] ! I0318 12:43:33.432773       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 12:44:47.800379    5712 command_runner.go:130] ! I0318 12:44:03.855331       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0318 12:44:47.800472    5712 command_runner.go:130] ! I0318 12:44:03.906638       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:47.800472    5712 command_runner.go:130] ! I0318 12:44:03.906763       1 main.go:227] handling current node
	I0318 12:44:47.800472    5712 command_runner.go:130] ! I0318 12:44:03.907280       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.800472    5712 command_runner.go:130] ! I0318 12:44:03.907371       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.800539    5712 command_runner.go:130] ! I0318 12:44:03.907763       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.25.159.102 Flags: [] Table: 0} 
	I0318 12:44:47.800539    5712 command_runner.go:130] ! I0318 12:44:03.907983       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.800539    5712 command_runner.go:130] ! I0318 12:44:03.907999       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.800539    5712 command_runner.go:130] ! I0318 12:44:03.908063       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.157.200 Flags: [] Table: 0} 
	I0318 12:44:47.800539    5712 command_runner.go:130] ! I0318 12:44:13.926166       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:47.800610    5712 command_runner.go:130] ! I0318 12:44:13.926260       1 main.go:227] handling current node
	I0318 12:44:47.800610    5712 command_runner.go:130] ! I0318 12:44:13.926281       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.800610    5712 command_runner.go:130] ! I0318 12:44:13.926377       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.800610    5712 command_runner.go:130] ! I0318 12:44:13.927231       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.800694    5712 command_runner.go:130] ! I0318 12:44:13.927364       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.800694    5712 command_runner.go:130] ! I0318 12:44:23.943396       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:47.800694    5712 command_runner.go:130] ! I0318 12:44:23.943437       1 main.go:227] handling current node
	I0318 12:44:47.800694    5712 command_runner.go:130] ! I0318 12:44:23.943450       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.800752    5712 command_runner.go:130] ! I0318 12:44:23.943456       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.800752    5712 command_runner.go:130] ! I0318 12:44:23.943816       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.800752    5712 command_runner.go:130] ! I0318 12:44:23.943956       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.800752    5712 command_runner.go:130] ! I0318 12:44:33.951114       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:47.800853    5712 command_runner.go:130] ! I0318 12:44:33.951215       1 main.go:227] handling current node
	I0318 12:44:47.800853    5712 command_runner.go:130] ! I0318 12:44:33.951232       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.800853    5712 command_runner.go:130] ! I0318 12:44:33.951241       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.800909    5712 command_runner.go:130] ! I0318 12:44:33.951807       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.800909    5712 command_runner.go:130] ! I0318 12:44:33.951927       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.800909    5712 command_runner.go:130] ! I0318 12:44:43.968530       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:47.800909    5712 command_runner.go:130] ! I0318 12:44:43.968658       1 main.go:227] handling current node
	I0318 12:44:47.800909    5712 command_runner.go:130] ! I0318 12:44:43.968737       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.800985    5712 command_runner.go:130] ! I0318 12:44:43.968990       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.800985    5712 command_runner.go:130] ! I0318 12:44:43.969485       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.800985    5712 command_runner.go:130] ! I0318 12:44:43.969715       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
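
This kindnet log shows the pod-network reconcile loop recovering after the restart: one "i/o timeout" at 12:44:03 while the API server at 10.96.0.1:443 was still coming back, then, every ~10 seconds, a pass over all three nodes that handles the current node (172.25.148.129) in place and programs a route per remote PodCIDR (10.244.1.0/24 via 172.25.159.102, 10.244.3.0/24 via 172.25.157.200). A minimal sketch of that "Adding route" step using the github.com/vishvananda/netlink package (Linux only; treating this as kindnet's exact implementation would be an assumption):

    package route

    import (
        "net"

        "github.com/vishvananda/netlink"
    )

    // AddPodCIDRRoute installs a route like "10.244.1.0/24 via 172.25.159.102",
    // mirroring the routes.go:62 lines in the kindnet log above.
    func AddPodCIDRRoute(podCIDR, nodeIP string) error {
        _, dst, err := net.ParseCIDR(podCIDR)
        if err != nil {
            return err
        }
        r := &netlink.Route{
            Dst: dst,                 // e.g. 10.244.1.0/24 (m02's PodCIDR)
            Gw:  net.ParseIP(nodeIP), // e.g. 172.25.159.102 (m02's InternalIP)
        }
        // RouteReplace keeps the operation idempotent across reconcile passes,
        // which run every ~10s in the log above.
        return netlink.RouteReplace(r)
    }
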
	I0318 12:44:47.805222    5712 logs.go:123] Gathering logs for kube-scheduler [bd1e4f4d262e] ...
	I0318 12:44:47.805291    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1e4f4d262e"
	I0318 12:44:47.837374    5712 command_runner.go:130] ! I0318 12:43:27.649061       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:47.837871    5712 command_runner.go:130] ! W0318 12:43:30.548831       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 12:44:47.837871    5712 command_runner.go:130] ! W0318 12:43:30.549092       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:47.837871    5712 command_runner.go:130] ! W0318 12:43:30.549282       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 12:44:47.838003    5712 command_runner.go:130] ! W0318 12:43:30.549461       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.613305       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.613417       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.618512       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.619171       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.619276       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.620071       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.720411       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:47.841047    5712 logs.go:123] Gathering logs for kube-proxy [4bbad08fe59a] ...
	I0318 12:44:47.841047    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbad08fe59a"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:04.970720       1 server_others.go:69] "Using iptables proxy"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:04.997380       1 node.go:141] Successfully retrieved node IP: 172.25.151.112
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.099028       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.099065       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.102885       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.103013       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.103652       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.103704       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.105505       1 config.go:188] "Starting service config controller"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.106093       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.106131       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.106138       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.107424       1 config.go:315] "Starting node config controller"
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.107456       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.206699       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.206811       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.207857       1 shared_informer.go:318] Caches are synced for node config
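
Note the 12:19 timestamps and node IP 172.25.151.112 above: this is the kube-proxy container from before the node restart; the post-restart container (575b41a3a85a, 12:43, 172.25.148.129) is gathered further down. A hedged sketch for fetching the currently running proxy's logs instead, assuming the standard kubeadm k8s-app=kube-proxy label:

    kubectl --context multinode-642600 -n kube-system logs -l k8s-app=kube-proxy --tail=20
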
	I0318 12:44:47.876492    5712 logs.go:123] Gathering logs for kube-apiserver [a48a6d310b86] ...
	I0318 12:44:47.876576    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a48a6d310b86"
	I0318 12:44:47.903035    5712 command_runner.go:130] ! I0318 12:43:26.873064       1 options.go:220] external host was not specified, using 172.25.148.129
	I0318 12:44:47.903035    5712 command_runner.go:130] ! I0318 12:43:26.879001       1 server.go:148] Version: v1.28.4
	I0318 12:44:47.903035    5712 command_runner.go:130] ! I0318 12:43:26.879883       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:27.623853       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:27.658081       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:27.658128       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:27.660963       1 instance.go:298] Using reconciler: lease
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:27.814829       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:27.815233       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:28.557814       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:28.558168       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.283146       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.346403       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.360856       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.360910       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.361419       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.361431       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.362356       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.365115       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.365134       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.365140       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.370774       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.370809       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.375063       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.375102       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.375108       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.375862       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.375929       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.375979       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.376693       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.384185       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.384228       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.384236       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.385110       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.385148       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.385155       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.388232       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.388272       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.392835       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.392872       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.392880       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.393504       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.393628       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.393636       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.401801       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.401838       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.401846       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.405508       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.409452       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.409492       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.409500       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.421682       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.421819       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0318 12:44:47.905098    5712 command_runner.go:130] ! W0318 12:43:29.421829       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0318 12:44:47.905098    5712 command_runner.go:130] ! I0318 12:43:29.426368       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0318 12:44:47.905098    5712 command_runner.go:130] ! W0318 12:43:29.426405       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.905098    5712 command_runner.go:130] ! W0318 12:43:29.426413       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.905098    5712 command_runner.go:130] ! I0318 12:43:29.427337       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0318 12:44:47.905098    5712 command_runner.go:130] ! W0318 12:43:29.427376       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.905098    5712 command_runner.go:130] ! I0318 12:43:29.459555       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0318 12:44:47.905098    5712 command_runner.go:130] ! W0318 12:43:29.459595       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.905098    5712 command_runner.go:130] ! I0318 12:43:30.367734       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:47.905098    5712 command_runner.go:130] ! I0318 12:43:30.367932       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:47.905331    5712 command_runner.go:130] ! I0318 12:43:30.368782       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0318 12:44:47.905331    5712 command_runner.go:130] ! I0318 12:43:30.370542       1 secure_serving.go:213] Serving securely on [::]:8443
	I0318 12:44:47.905331    5712 command_runner.go:130] ! I0318 12:43:30.370628       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:47.905331    5712 command_runner.go:130] ! I0318 12:43:30.371667       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:44:47.905331    5712 command_runner.go:130] ! I0318 12:43:30.372321       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.372682       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.373559       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.373947       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.374159       1 available_controller.go:423] Starting AvailableConditionController
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.374194       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.374404       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.374979       1 aggregator.go:164] waiting for initial CRD sync...
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.375087       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.375452       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 12:44:47.905589    5712 command_runner.go:130] ! I0318 12:43:30.376837       1 controller.go:116] Starting legacy_token_tracking_controller
	I0318 12:44:47.905589    5712 command_runner.go:130] ! I0318 12:43:30.377105       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0318 12:44:47.905589    5712 command_runner.go:130] ! I0318 12:43:30.377485       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0318 12:44:47.905589    5712 command_runner.go:130] ! I0318 12:43:30.378013       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 12:44:47.905589    5712 command_runner.go:130] ! I0318 12:43:30.378732       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0318 12:44:47.905704    5712 command_runner.go:130] ! I0318 12:43:30.379224       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.379834       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.380470       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.380848       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.382047       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.382230       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.383964       1 controller.go:134] Starting OpenAPI controller
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.384158       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 12:44:47.906006    5712 command_runner.go:130] ! I0318 12:43:30.384420       1 naming_controller.go:291] Starting NamingConditionController
	I0318 12:44:47.906006    5712 command_runner.go:130] ! I0318 12:43:30.384790       1 establishing_controller.go:76] Starting EstablishingController
	I0318 12:44:47.906006    5712 command_runner.go:130] ! I0318 12:43:30.385986       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 12:44:47.906006    5712 command_runner.go:130] ! I0318 12:43:30.386163       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 12:44:47.906006    5712 command_runner.go:130] ! I0318 12:43:30.386327       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 12:44:47.906115    5712 command_runner.go:130] ! I0318 12:43:30.474963       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 12:44:47.906115    5712 command_runner.go:130] ! I0318 12:43:30.476622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 12:44:47.906115    5712 command_runner.go:130] ! I0318 12:43:30.496736       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.497067       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.497511       1 aggregator.go:166] initial CRD sync complete...
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.498503       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.498662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.498825       1 cache.go:39] Caches are synced for autoregister controller
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.570075       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.585880       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.624565       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 12:44:47.906366    5712 command_runner.go:130] ! I0318 12:43:30.681515       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 12:44:47.906366    5712 command_runner.go:130] ! I0318 12:43:30.681604       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 12:44:47.906366    5712 command_runner.go:130] ! I0318 12:43:31.410513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 12:44:47.906366    5712 command_runner.go:130] ! W0318 12:43:31.917736       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129 172.25.151.112]
	I0318 12:44:47.906366    5712 command_runner.go:130] ! I0318 12:43:31.919293       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 12:44:47.906366    5712 command_runner.go:130] ! I0318 12:43:31.929122       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 12:44:47.906366    5712 command_runner.go:130] ! I0318 12:43:34.160688       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 12:44:47.906481    5712 command_runner.go:130] ! I0318 12:43:34.367742       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 12:44:47.906481    5712 command_runner.go:130] ! I0318 12:43:34.406080       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 12:44:47.906481    5712 command_runner.go:130] ! I0318 12:43:34.542647       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 12:44:47.906481    5712 command_runner.go:130] ! I0318 12:43:34.562855       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 12:44:47.906481    5712 command_runner.go:130] ! W0318 12:43:51.920595       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129]
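
The final warning shows the kubernetes endpoint list collapsing from [172.25.148.129 172.25.151.112] to the current apiserver address, likely once the stale lease for the node's pre-restart IP expired. A minimal sketch to inspect that endpoint object directly (context name assumed from this run):

    kubectl --context multinode-642600 get endpoints kubernetes -o yaml
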
	I0318 12:44:47.915404    5712 logs.go:123] Gathering logs for coredns [fcf17db92b35] ...
	I0318 12:44:47.915445    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf17db92b35"
	I0318 12:44:47.947272    5712 command_runner.go:130] > .:53
	I0318 12:44:47.947376    5712 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	I0318 12:44:47.947376    5712 command_runner.go:130] > CoreDNS-1.10.1
	I0318 12:44:47.947376    5712 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 12:44:47.947376    5712 command_runner.go:130] > [INFO] 127.0.0.1:53681 - 55845 "HINFO IN 162544917519141994.8165783507281513505. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.028223444s
	I0318 12:44:47.947623    5712 logs.go:123] Gathering logs for coredns [e81f1d2fdb36] ...
	I0318 12:44:47.947753    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81f1d2fdb36"
	I0318 12:44:47.983213    5712 command_runner.go:130] > .:53
	I0318 12:44:47.983213    5712 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	I0318 12:44:47.983213    5712 command_runner.go:130] > CoreDNS-1.10.1
	I0318 12:44:47.983213    5712 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 12:44:47.983213    5712 command_runner.go:130] > [INFO] 127.0.0.1:48183 - 41539 "HINFO IN 767578685007701398.8900982300391989616. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.040167772s
	I0318 12:44:47.983213    5712 command_runner.go:130] > [INFO] 10.244.0.3:56190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320901s
	I0318 12:44:47.983213    5712 command_runner.go:130] > [INFO] 10.244.0.3:43050 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.04023503s
	I0318 12:44:47.983213    5712 command_runner.go:130] > [INFO] 10.244.0.3:47302 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.158419612s
	I0318 12:44:47.984292    5712 command_runner.go:130] > [INFO] 10.244.0.3:37199 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.162590352s
	I0318 12:44:47.984292    5712 command_runner.go:130] > [INFO] 10.244.1.2:48003 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216101s
	I0318 12:44:47.984359    5712 command_runner.go:130] > [INFO] 10.244.1.2:48857 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000380201s
	I0318 12:44:47.984465    5712 command_runner.go:130] > [INFO] 10.244.1.2:52412 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000070401s
	I0318 12:44:47.984567    5712 command_runner.go:130] > [INFO] 10.244.1.2:59362 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000071801s
	I0318 12:44:47.984567    5712 command_runner.go:130] > [INFO] 10.244.0.3:38833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000250501s
	I0318 12:44:47.984567    5712 command_runner.go:130] > [INFO] 10.244.0.3:34860 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.064163607s
	I0318 12:44:47.984659    5712 command_runner.go:130] > [INFO] 10.244.0.3:45210 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227601s
	I0318 12:44:47.984659    5712 command_runner.go:130] > [INFO] 10.244.0.3:32804 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001229s
	I0318 12:44:47.984659    5712 command_runner.go:130] > [INFO] 10.244.0.3:44904 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01563145s
	I0318 12:44:47.984659    5712 command_runner.go:130] > [INFO] 10.244.0.3:34958 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002035s
	I0318 12:44:47.984750    5712 command_runner.go:130] > [INFO] 10.244.0.3:59094 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001507s
	I0318 12:44:47.984750    5712 command_runner.go:130] > [INFO] 10.244.0.3:39370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181001s
	I0318 12:44:47.984750    5712 command_runner.go:130] > [INFO] 10.244.1.2:40318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000302101s
	I0318 12:44:47.984750    5712 command_runner.go:130] > [INFO] 10.244.1.2:43523 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001489s
	I0318 12:44:47.984750    5712 command_runner.go:130] > [INFO] 10.244.1.2:47882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001346s
	I0318 12:44:47.984842    5712 command_runner.go:130] > [INFO] 10.244.1.2:38222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057401s
	I0318 12:44:47.984842    5712 command_runner.go:130] > [INFO] 10.244.1.2:49068 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001253s
	I0318 12:44:47.984842    5712 command_runner.go:130] > [INFO] 10.244.1.2:35375 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000582s
	I0318 12:44:47.984842    5712 command_runner.go:130] > [INFO] 10.244.1.2:40933 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179201s
	I0318 12:44:47.984842    5712 command_runner.go:130] > [INFO] 10.244.1.2:36014 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002051s
	I0318 12:44:47.984933    5712 command_runner.go:130] > [INFO] 10.244.0.3:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265401s
	I0318 12:44:47.984933    5712 command_runner.go:130] > [INFO] 10.244.0.3:52912 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148001s
	I0318 12:44:47.984933    5712 command_runner.go:130] > [INFO] 10.244.0.3:33147 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143701s
	I0318 12:44:47.984933    5712 command_runner.go:130] > [INFO] 10.244.0.3:49893 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000536s
	I0318 12:44:47.984933    5712 command_runner.go:130] > [INFO] 10.244.1.2:42681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001221s
	I0318 12:44:47.985022    5712 command_runner.go:130] > [INFO] 10.244.1.2:41416 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143s
	I0318 12:44:47.985022    5712 command_runner.go:130] > [INFO] 10.244.1.2:58254 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000241501s
	I0318 12:44:47.985022    5712 command_runner.go:130] > [INFO] 10.244.1.2:35844 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197201s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.0.3:33559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102201s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.0.3:53963 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158701s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.0.3:41406 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001297s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.0.3:34685 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000264001s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.1.2:43312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001178s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.1.2:55281 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235501s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.1.2:34710 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000874s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.1.2:57686 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000557s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
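
This CoreDNS instance served the in-cluster lookups logged above before receiving SIGTERM during the restart. A hedged sketch to replay one of the logged queries from inside the cluster; the busybox image and tag here are illustrative, not the image the suite pins:

    kubectl --context multinode-642600 run dnsprobe --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local
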
	I0318 12:44:47.988652    5712 logs.go:123] Gathering logs for etcd [8e7911b58c58] ...
	I0318 12:44:47.988708    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7911b58c58"
	I0318 12:44:48.020960    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.200481Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 12:44:48.021471    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210029Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.25.148.129:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.25.148.129:2380","--initial-cluster=multinode-642600=https://172.25.148.129:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.25.148.129:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.25.148.129:2380","--name=multinode-642600","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0318 12:44:48.021471    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210181Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0318 12:44:48.021543    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.21031Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 12:44:48.021543    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210331Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.25.148.129:2380"]}
	I0318 12:44:48.021618    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210546Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 12:44:48.021700    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.222773Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"]}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.228178Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-642600","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.25.148.129:2380"],"listen-peer-urls":["https://172.25.148.129:2380"],"advertise-client-urls":["https://172.25.148.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.271498Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"41.739133ms"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.299465Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.319578Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","commit-index":2138}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.319995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 switched to configuration voters=()"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.320107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became follower at term 2"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.320138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 78764271becab2d0 [peers: [], term: 2, commit: 2138, applied: 0, lastindex: 2138, lastterm: 2]"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.325366Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.329191Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1388}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.333388Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1848}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.357951Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.372436Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"78764271becab2d0","timeout":"7s"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373126Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"78764271becab2d0"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373252Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"78764271becab2d0","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373688Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375391Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375647Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375735Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.377469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 switched to configuration voters=(8680198388102902480)"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.377568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","added-peer-id":"78764271becab2d0","added-peer-peer-urls":["https://172.25.151.112:2380"]}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.378749Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","cluster-version":"3.5"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.378942Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0318 12:44:48.022416    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.380244Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 12:44:48.022489    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.380886Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"78764271becab2d0","initial-advertise-peer-urls":["https://172.25.148.129:2380"],"listen-peer-urls":["https://172.25.148.129:2380"],"advertise-client-urls":["https://172.25.148.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0318 12:44:48.022489    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.383141Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.25.148.129:2380"}
	I0318 12:44:48.022489    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.383279Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.25.148.129:2380"}
	I0318 12:44:48.022489    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.393018Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0318 12:44:48.022582    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.621966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 is starting a new election at term 2"}
	I0318 12:44:48.022582    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became pre-candidate at term 2"}
	I0318 12:44:48.022647    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgPreVoteResp from 78764271becab2d0 at term 2"}
	I0318 12:44:48.022647    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became candidate at term 3"}
	I0318 12:44:48.022719    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgVoteResp from 78764271becab2d0 at term 3"}
	I0318 12:44:48.022719    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became leader at term 3"}
	I0318 12:44:48.022719    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 78764271becab2d0 elected leader 78764271becab2d0 at term 3"}
	I0318 12:44:48.022790    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641347Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"78764271becab2d0","local-member-attributes":"{Name:multinode-642600 ClientURLs:[https://172.25.148.129:2379]}","request-path":"/0/members/78764271becab2d0/attributes","cluster-id":"31713adf8492fbc4","publish-timeout":"7s"}
	I0318 12:44:48.022790    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 12:44:48.022841    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.64409Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0318 12:44:48.022894    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.644373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0318 12:44:48.022894    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 12:44:48.022931    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.650212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.148.129:2379"}
	I0318 12:44:48.022968    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.651053Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
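
etcd came back as a single-member cluster and re-elected itself leader at term 3. A minimal health-check sketch using the certificate paths from the logged startup args, assuming the Docker runtime and the etcd container ID from this run (etcdctl ships inside the etcd image):

    minikube -p multinode-642600 ssh -- "sudo docker exec 8e7911b58c58 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint health"
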
	I0318 12:44:48.032306    5712 logs.go:123] Gathering logs for kube-proxy [575b41a3a85a] ...
	I0318 12:44:48.032306    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 575b41a3a85a"
	I0318 12:44:48.097346    5712 command_runner.go:130] ! I0318 12:43:33.336778       1 server_others.go:69] "Using iptables proxy"
	I0318 12:44:48.097346    5712 command_runner.go:130] ! I0318 12:43:33.550433       1 node.go:141] Successfully retrieved node IP: 172.25.148.129
	I0318 12:44:48.097346    5712 command_runner.go:130] ! I0318 12:43:33.793084       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:44:48.097879    5712 command_runner.go:130] ! I0318 12:43:33.793109       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:44:48.097879    5712 command_runner.go:130] ! I0318 12:43:33.796954       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:44:48.097879    5712 command_runner.go:130] ! I0318 12:43:33.798936       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:44:48.097879    5712 command_runner.go:130] ! I0318 12:43:33.800347       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:44:48.097948    5712 command_runner.go:130] ! I0318 12:43:33.800569       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:48.097948    5712 command_runner.go:130] ! I0318 12:43:33.803648       1 config.go:188] "Starting service config controller"
	I0318 12:44:48.097948    5712 command_runner.go:130] ! I0318 12:43:33.805156       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:44:48.097948    5712 command_runner.go:130] ! I0318 12:43:33.805421       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:44:48.098019    5712 command_runner.go:130] ! I0318 12:43:33.805584       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:44:48.098019    5712 command_runner.go:130] ! I0318 12:43:33.808628       1 config.go:315] "Starting node config controller"
	I0318 12:44:48.098019    5712 command_runner.go:130] ! I0318 12:43:33.808736       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:44:48.098019    5712 command_runner.go:130] ! I0318 12:43:33.905580       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:44:48.098019    5712 command_runner.go:130] ! I0318 12:43:33.907041       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:44:48.098019    5712 command_runner.go:130] ! I0318 12:43:33.909416       1 shared_informer.go:318] Caches are synced for node config
	I0318 12:44:48.100561    5712 logs.go:123] Gathering logs for kube-controller-manager [a54be4436901] ...
	I0318 12:44:48.100617    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54be4436901"
	I0318 12:44:48.137560    5712 command_runner.go:130] ! I0318 12:18:43.818653       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:48.139082    5712 command_runner.go:130] ! I0318 12:18:45.050029       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:45.050365       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:45.053707       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:45.056733       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:45.057073       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:45.057232       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:49.569825       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:49.602388       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:49.603663       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:49.603680       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 12:44:48.139290    5712 command_runner.go:130] ! I0318 12:18:49.621364       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 12:44:48.139290    5712 command_runner.go:130] ! I0318 12:18:49.621624       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:48.139348    5712 command_runner.go:130] ! I0318 12:18:49.621432       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 12:44:48.139348    5712 command_runner.go:130] ! I0318 12:18:49.622281       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 12:44:48.139348    5712 command_runner.go:130] ! I0318 12:18:49.644362       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 12:44:48.139398    5712 command_runner.go:130] ! I0318 12:18:49.644758       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 12:44:48.139398    5712 command_runner.go:130] ! I0318 12:18:49.646607       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 12:44:48.139437    5712 command_runner.go:130] ! I0318 12:18:49.660400       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 12:44:48.139461    5712 command_runner.go:130] ! I0318 12:18:49.661053       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 12:44:48.139461    5712 command_runner.go:130] ! I0318 12:18:49.670023       1 shared_informer.go:318] Caches are synced for tokens
	I0318 12:44:48.139507    5712 command_runner.go:130] ! I0318 12:18:49.679784       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 12:44:48.139507    5712 command_runner.go:130] ! I0318 12:18:49.680015       1 expand_controller.go:328] "Starting expand controller"
	I0318 12:44:48.139507    5712 command_runner.go:130] ! I0318 12:18:49.680028       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 12:44:48.139507    5712 command_runner.go:130] ! I0318 12:18:49.692925       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 12:44:48.139575    5712 command_runner.go:130] ! I0318 12:18:49.693164       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 12:44:48.139575    5712 command_runner.go:130] ! I0318 12:18:49.693449       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 12:44:48.139575    5712 command_runner.go:130] ! I0318 12:18:49.727464       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 12:44:48.139575    5712 command_runner.go:130] ! I0318 12:18:49.727835       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 12:44:48.139642    5712 command_runner.go:130] ! I0318 12:18:49.727848       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 12:44:48.139642    5712 command_runner.go:130] ! I0318 12:18:49.742409       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 12:44:48.139701    5712 command_runner.go:130] ! I0318 12:18:49.743029       1 disruption.go:433] "Sending events to api server."
	I0318 12:44:48.139701    5712 command_runner.go:130] ! I0318 12:18:49.743301       1 disruption.go:444] "Starting disruption controller"
	I0318 12:44:48.139701    5712 command_runner.go:130] ! I0318 12:18:49.743449       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 12:44:48.139701    5712 command_runner.go:130] ! I0318 12:18:49.759716       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 12:44:48.139766    5712 command_runner.go:130] ! I0318 12:18:49.760338       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 12:44:48.139766    5712 command_runner.go:130] ! I0318 12:18:49.760376       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 12:44:48.139766    5712 command_runner.go:130] ! I0318 12:18:49.829809       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 12:44:48.139824    5712 command_runner.go:130] ! I0318 12:18:49.830343       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 12:44:48.139824    5712 command_runner.go:130] ! I0318 12:18:49.830415       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 12:44:48.139888    5712 command_runner.go:130] ! I0318 12:18:50.085725       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 12:44:48.139888    5712 command_runner.go:130] ! I0318 12:18:50.086016       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 12:44:48.139888    5712 command_runner.go:130] ! I0318 12:18:50.086167       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 12:44:48.139947    5712 command_runner.go:130] ! I0318 12:18:50.234974       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 12:44:48.139947    5712 command_runner.go:130] ! I0318 12:18:50.242121       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 12:44:48.139947    5712 command_runner.go:130] ! I0318 12:18:50.242138       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 12:44:48.139947    5712 command_runner.go:130] ! I0318 12:18:50.384031       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 12:44:48.140012    5712 command_runner.go:130] ! I0318 12:18:50.384090       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 12:44:48.140012    5712 command_runner.go:130] ! I0318 12:18:50.384100       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 12:44:48.140012    5712 command_runner.go:130] ! I0318 12:18:50.384108       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 12:44:48.140072    5712 command_runner.go:130] ! I0318 12:18:50.530182       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 12:44:48.140072    5712 command_runner.go:130] ! I0318 12:18:50.530258       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 12:44:48.140072    5712 command_runner.go:130] ! I0318 12:18:50.530267       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 12:44:48.140072    5712 command_runner.go:130] ! I0318 12:18:50.695232       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 12:44:48.140072    5712 command_runner.go:130] ! I0318 12:18:50.695351       1 job_controller.go:226] "Starting job controller"
	I0318 12:44:48.140137    5712 command_runner.go:130] ! I0318 12:18:50.695361       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 12:44:48.140137    5712 command_runner.go:130] ! I0318 12:18:50.833418       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 12:44:48.140137    5712 command_runner.go:130] ! I0318 12:18:50.833674       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 12:44:48.140195    5712 command_runner.go:130] ! I0318 12:18:50.833686       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 12:44:48.140195    5712 command_runner.go:130] ! I0318 12:18:50.998838       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 12:44:48.140195    5712 command_runner.go:130] ! I0318 12:18:50.999193       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 12:44:48.140258    5712 command_runner.go:130] ! I0318 12:18:50.999227       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 12:44:48.140258    5712 command_runner.go:130] ! I0318 12:18:51.141445       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 12:44:48.140258    5712 command_runner.go:130] ! I0318 12:18:51.141508       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 12:44:48.140316    5712 command_runner.go:130] ! I0318 12:18:51.141518       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 12:44:48.140316    5712 command_runner.go:130] ! I0318 12:18:51.279642       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 12:44:48.140316    5712 command_runner.go:130] ! I0318 12:18:51.279728       1 gc_controller.go:101] "Starting GC controller"
	I0318 12:44:48.140667    5712 command_runner.go:130] ! I0318 12:18:51.279742       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 12:44:48.141156    5712 command_runner.go:130] ! I0318 12:18:51.429394       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 12:44:48.141229    5712 command_runner.go:130] ! I0318 12:18:51.429600       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 12:44:48.141229    5712 command_runner.go:130] ! I0318 12:18:51.429612       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 12:44:48.141285    5712 command_runner.go:130] ! I0318 12:19:01.598915       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 12:44:48.141285    5712 command_runner.go:130] ! I0318 12:19:01.598966       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 12:44:48.141321    5712 command_runner.go:130] ! I0318 12:19:01.599163       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 12:44:48.141321    5712 command_runner.go:130] ! I0318 12:19:01.599174       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 12:44:48.141321    5712 command_runner.go:130] ! I0318 12:19:01.601488       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 12:44:48.141321    5712 command_runner.go:130] ! I0318 12:19:01.601803       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 12:44:48.141394    5712 command_runner.go:130] ! I0318 12:19:01.601987       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 12:44:48.141394    5712 command_runner.go:130] ! I0318 12:19:01.602013       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 12:44:48.141394    5712 command_runner.go:130] ! I0318 12:19:01.602019       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 12:44:48.141460    5712 command_runner.go:130] ! I0318 12:19:01.623744       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 12:44:48.141460    5712 command_runner.go:130] ! I0318 12:19:01.624435       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 12:44:48.141460    5712 command_runner.go:130] ! I0318 12:19:01.624966       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 12:44:48.141530    5712 command_runner.go:130] ! I0318 12:19:01.663430       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 12:44:48.141530    5712 command_runner.go:130] ! I0318 12:19:01.663839       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 12:44:48.141530    5712 command_runner.go:130] ! I0318 12:19:01.663858       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 12:44:48.141600    5712 command_runner.go:130] ! I0318 12:19:01.710104       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 12:44:48.141600    5712 command_runner.go:130] ! I0318 12:19:01.710384       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 12:44:48.141600    5712 command_runner.go:130] ! I0318 12:19:01.710455       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 12:44:48.141600    5712 command_runner.go:130] ! I0318 12:19:01.710487       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 12:44:48.141681    5712 command_runner.go:130] ! I0318 12:19:01.710760       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 12:44:48.141681    5712 command_runner.go:130] ! I0318 12:19:01.710795       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 12:44:48.141681    5712 command_runner.go:130] ! I0318 12:19:01.710822       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 12:44:48.141739    5712 command_runner.go:130] ! I0318 12:19:01.710886       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 12:44:48.141739    5712 command_runner.go:130] ! I0318 12:19:01.710930       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 12:44:48.141780    5712 command_runner.go:130] ! I0318 12:19:01.710986       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 12:44:48.141780    5712 command_runner.go:130] ! I0318 12:19:01.711095       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 12:44:48.141780    5712 command_runner.go:130] ! I0318 12:19:01.711137       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 12:44:48.141877    5712 command_runner.go:130] ! I0318 12:19:01.711160       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 12:44:48.141877    5712 command_runner.go:130] ! I0318 12:19:01.711179       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 12:44:48.141877    5712 command_runner.go:130] ! I0318 12:19:01.711211       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 12:44:48.141877    5712 command_runner.go:130] ! I0318 12:19:01.711237       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711261       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711316       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711339       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711356       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711486       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711654       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711784       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.715155       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.715586       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.715886       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.732340       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.732695       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.732944       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.747011       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.747361       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.747484       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 12:44:48.141953    5712 command_runner.go:130] ! E0318 12:19:01.771424       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 12:44:48.142344    5712 command_runner.go:130] ! I0318 12:19:01.771527       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 12:44:48.142428    5712 command_runner.go:130] ! I0318 12:19:01.771544       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 12:44:48.142428    5712 command_runner.go:130] ! I0318 12:19:01.772072       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 12:44:48.142488    5712 command_runner.go:130] ! E0318 12:19:01.775461       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 12:44:48.143064    5712 command_runner.go:130] ! I0318 12:19:01.775656       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 12:44:48.143130    5712 command_runner.go:130] ! I0318 12:19:01.788795       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 12:44:48.143130    5712 command_runner.go:130] ! I0318 12:19:01.789335       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 12:44:48.143169    5712 command_runner.go:130] ! I0318 12:19:01.789368       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 12:44:48.143169    5712 command_runner.go:130] ! I0318 12:19:01.809091       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 12:44:48.143216    5712 command_runner.go:130] ! I0318 12:19:01.809368       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 12:44:48.143216    5712 command_runner.go:130] ! I0318 12:19:01.809720       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 12:44:48.143216    5712 command_runner.go:130] ! I0318 12:19:01.846190       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 12:44:48.143216    5712 command_runner.go:130] ! I0318 12:19:01.846779       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 12:44:48.143284    5712 command_runner.go:130] ! I0318 12:19:01.846879       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 12:44:48.143284    5712 command_runner.go:130] ! I0318 12:19:02.137994       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 12:44:48.144098    5712 command_runner.go:130] ! I0318 12:19:02.138059       1 horizontal.go:200] "Starting HPA controller"
	I0318 12:44:48.144168    5712 command_runner.go:130] ! I0318 12:19:02.138069       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 12:44:48.144268    5712 command_runner.go:130] ! I0318 12:19:02.189502       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 12:44:48.144268    5712 command_runner.go:130] ! I0318 12:19:02.189864       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 12:44:48.144268    5712 command_runner.go:130] ! I0318 12:19:02.190041       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:48.144268    5712 command_runner.go:130] ! I0318 12:19:02.191172       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 12:44:48.144841    5712 command_runner.go:130] ! I0318 12:19:02.191256       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 12:44:48.144928    5712 command_runner.go:130] ! I0318 12:19:02.191347       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:48.144971    5712 command_runner.go:130] ! I0318 12:19:02.193057       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 12:44:48.144971    5712 command_runner.go:130] ! I0318 12:19:02.193152       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:48.144971    5712 command_runner.go:130] ! I0318 12:19:02.193246       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:48.145107    5712 command_runner.go:130] ! I0318 12:19:02.194807       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 12:44:48.145107    5712 command_runner.go:130] ! I0318 12:19:02.194851       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 12:44:48.145107    5712 command_runner.go:130] ! I0318 12:19:02.195648       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 12:44:48.145168    5712 command_runner.go:130] ! I0318 12:19:02.194886       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:48.145168    5712 command_runner.go:130] ! I0318 12:19:02.345061       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 12:44:48.145168    5712 command_runner.go:130] ! I0318 12:19:02.347311       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 12:44:48.145168    5712 command_runner.go:130] ! I0318 12:19:02.364524       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:48.145168    5712 command_runner.go:130] ! I0318 12:19:02.380069       1 shared_informer.go:318] Caches are synced for expand
	I0318 12:44:48.145306    5712 command_runner.go:130] ! I0318 12:19:02.390503       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 12:44:48.145306    5712 command_runner.go:130] ! I0318 12:19:02.391317       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 12:44:48.145306    5712 command_runner.go:130] ! I0318 12:19:02.393201       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:48.145377    5712 command_runner.go:130] ! I0318 12:19:02.402532       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 12:44:48.145377    5712 command_runner.go:130] ! I0318 12:19:02.419971       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 12:44:48.145445    5712 command_runner.go:130] ! I0318 12:19:02.421082       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600\" does not exist"
	I0318 12:44:48.145445    5712 command_runner.go:130] ! I0318 12:19:02.427201       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 12:44:48.145445    5712 command_runner.go:130] ! I0318 12:19:02.427876       1 shared_informer.go:318] Caches are synced for service account
	I0318 12:44:48.145445    5712 command_runner.go:130] ! I0318 12:19:02.429003       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.429629       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.430311       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.432115       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.434603       1 shared_informer.go:318] Caches are synced for TTL
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.437362       1 shared_informer.go:318] Caches are synced for deployment
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.438306       1 shared_informer.go:318] Caches are synced for HPA
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.441785       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.442916       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.444302       1 shared_informer.go:318] Caches are synced for disruption
	I0318 12:44:48.145679    5712 command_runner.go:130] ! I0318 12:19:02.447137       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 12:44:48.145679    5712 command_runner.go:130] ! I0318 12:19:02.447694       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 12:44:48.145739    5712 command_runner.go:130] ! I0318 12:19:02.452098       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 12:44:48.145739    5712 command_runner.go:130] ! I0318 12:19:02.454023       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 12:44:48.145790    5712 command_runner.go:130] ! I0318 12:19:02.461158       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 12:44:48.145855    5712 command_runner.go:130] ! I0318 12:19:02.464623       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 12:44:48.145855    5712 command_runner.go:130] ! I0318 12:19:02.480847       1 shared_informer.go:318] Caches are synced for GC
	I0318 12:44:48.145855    5712 command_runner.go:130] ! I0318 12:19:02.487772       1 shared_informer.go:318] Caches are synced for namespace
	I0318 12:44:48.145855    5712 command_runner.go:130] ! I0318 12:19:02.490082       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.494160       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.499312       1 shared_informer.go:318] Caches are synced for node
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.499587       1 range_allocator.go:174] "Sending events to api server"
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.499772       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.500365       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.500954       1 shared_informer.go:318] Caches are synced for job
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.501438       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.501724       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.503931       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.509883       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.528934       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600" podCIDRs=["10.244.0.0/24"]
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.565942       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.603468       1 shared_informer.go:318] Caches are synced for taint
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.603627       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.603721       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600"
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.603760       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.603782       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.603821       1 taint_manager.go:210] "Sending events to api server"
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.605481       1 event.go:307] "Event occurred" object="multinode-642600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600 event: Registered Node multinode-642600 in Controller"
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.613688       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:48.146281    5712 command_runner.go:130] ! I0318 12:19:02.644197       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.146281    5712 command_runner.go:130] ! I0318 12:19:02.675188       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.146281    5712 command_runner.go:130] ! I0318 12:19:02.675510       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.146281    5712 command_runner.go:130] ! I0318 12:19:02.681286       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.146281    5712 command_runner.go:130] ! I0318 12:19:03.023915       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:48.146428    5712 command_runner.go:130] ! I0318 12:19:03.023946       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:44:48.146428    5712 command_runner.go:130] ! I0318 12:19:03.029139       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:48.146428    5712 command_runner.go:130] ! I0318 12:19:03.075135       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0318 12:44:48.146428    5712 command_runner.go:130] ! I0318 12:19:03.175071       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kpt4f"
	I0318 12:44:48.146428    5712 command_runner.go:130] ! I0318 12:19:03.181384       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4dg79"
	I0318 12:44:48.146428    5712 command_runner.go:130] ! I0318 12:19:03.624405       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fgn7v"
	I0318 12:44:48.146571    5712 command_runner.go:130] ! I0318 12:19:03.691902       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xkgdt"
	I0318 12:44:48.146571    5712 command_runner.go:130] ! I0318 12:19:03.810454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="734.97569ms"
	I0318 12:44:48.146571    5712 command_runner.go:130] ! I0318 12:19:03.847906       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="37.087083ms"
	I0318 12:44:48.146571    5712 command_runner.go:130] ! I0318 12:19:03.945758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.729709ms"
	I0318 12:44:48.146571    5712 command_runner.go:130] ! I0318 12:19:03.945958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.501µs"
	I0318 12:44:48.146696    5712 command_runner.go:130] ! I0318 12:19:04.640409       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0318 12:44:48.146696    5712 command_runner.go:130] ! I0318 12:19:04.732241       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-xkgdt"
	I0318 12:44:48.146696    5712 command_runner.go:130] ! I0318 12:19:04.763359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.567183ms"
	I0318 12:44:48.146767    5712 command_runner.go:130] ! I0318 12:19:04.828298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.870031ms"
	I0318 12:44:48.146767    5712 command_runner.go:130] ! I0318 12:19:04.890459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.083804ms"
	I0318 12:44:48.146767    5712 command_runner.go:130] ! I0318 12:19:04.890764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.4µs"
	I0318 12:44:48.146822    5712 command_runner.go:130] ! I0318 12:19:15.938090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="157.9µs"
	I0318 12:44:48.146822    5712 command_runner.go:130] ! I0318 12:19:15.982953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.301µs"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:19:17.607464       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:19:19.208242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.7µs"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:19:19.274086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.124146ms"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:19:19.275145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="211.9µs"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:12.652722       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m02\" does not exist"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:12.679760       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m02" podCIDRs=["10.244.1.0/24"]
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:12.706735       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d5llj"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:12.706774       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vts9f"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:17.642129       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m02"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:17.642212       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:34.263318       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:01.851486       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:01.881281       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-hmhdf"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:01.924301       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-48qkw"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:01.946058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.676064ms"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:02.049702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="103.251772ms"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:02.049789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="35.4µs"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:04.783277       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.030749ms"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:04.783520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.9µs"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:05.441638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.350047ms"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:05.441876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="105µs"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:27:09.073772       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:09.075345       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:09.095707       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.2.0/24"]
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:09.110695       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-khbjt"
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:09.110730       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-thkjp"
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:12.715112       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m03"
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:12.715611       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:30.856729       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:35:52.853028       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:35:52.854041       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:35:52.871920       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:35:52.891158       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:40.101072       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:42.930337       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-642600-m03 event: Removing Node multinode-642600-m03 from Controller"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:46.825246       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:46.827225       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:46.865011       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.3.0/24"]
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:47.931681       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:52.975724       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:40:33.280094       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:40:33.281180       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:40:33.601041       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:40:33.698293       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
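The controller-manager entries above follow one pattern per controller: "Started controller", then "Waiting for caches to sync", then "Caches are synced" once its shared informers have caught up. A minimal client-go sketch of that sync gate, assuming only a reachable kubeconfig at the default location (standalone illustration, not minikube or controller-manager code; the 30s resync period is illustrative):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed: a kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// This gate is what produces the "Waiting for caches to sync" /
	// "Caches are synced" pairs in the controller logs above.
	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		panic("informer caches never synced")
	}
	fmt.Println("caches synced; safe to start controller workers")
}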
	I0318 12:44:50.685337    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:50.685337    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:50.685337    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.685337    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:50.699158    5712 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0318 12:44:50.699253    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:50.699253    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:50 GMT
	I0318 12:44:50.699253    5712 round_trippers.go:580]     Audit-Id: 8452c9cf-f9ad-4c44-a283-8c14bef22ec7
	I0318 12:44:50.699253    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:50.699253    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:50.699253    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:50.699253    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:50.701336    5712 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2068"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"2054","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83063 chars]
	I0318 12:44:50.705245    5712 system_pods.go:59] 12 kube-system pods found
	I0318 12:44:50.705245    5712 system_pods.go:61] "coredns-5dd5756b68-fgn7v" [7bc52797-b4bd-4046-b3d5-fae9c8ccd13b] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "etcd-multinode-642600" [6f0ca14e-af4b-4442-8a48-28b69c699976] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kindnet-d5llj" [caa4170d-6120-414a-950c-92a0380a70b8] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kindnet-kpt4f" [acd9d7a0-0e27-4bbb-8562-6fbf374742ca] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kindnet-thkjp" [a7e20c36-c1d1-4146-a66c-40448e1ae0e5] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kube-apiserver-multinode-642600" [ab8e6b8b-cbac-4c90-8f57-9af2760ced9c] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kube-controller-manager-multinode-642600" [1dd2a576-c5a0-44e5-b194-545e8b18962c] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kube-proxy-4dg79" [449242c2-ad12-4da5-b339-3be7ab8a9b16] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kube-proxy-khbjt" [594efa46-7e30-40e6-92dd-9c9c80bc787a] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kube-proxy-vts9f" [9545be8f-07fd-49dd-99bd-e9e976e65e7b] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kube-scheduler-multinode-642600" [52e29d3b-d6e9-4109-916d-74123a2ab190] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "storage-provisioner" [d2718b8a-26a9-4c86-bf9a-221d1ee23ceb] Running
	I0318 12:44:50.705245    5712 system_pods.go:74] duration metric: took 4.0124068s to wait for pod list to return data ...
	I0318 12:44:50.705245    5712 default_sa.go:34] waiting for default service account to be created ...
	I0318 12:44:50.706017    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:44:50.706081    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:50.706081    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.706081    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:50.710852    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:50.710852    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:50.711687    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:50.711687    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:50.711687    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:50.711729    5712 round_trippers.go:580]     Content-Length: 262
	I0318 12:44:50.711729    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:50 GMT
	I0318 12:44:50.711729    5712 round_trippers.go:580]     Audit-Id: 8dae92a1-618c-4624-b23a-459299dcdc55
	I0318 12:44:50.711729    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:50.711729    5712 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"2068"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"cb0307d5-001e-4a17-89ea-7a5b4f2963cc","resourceVersion":"344","creationTimestamp":"2024-03-18T12:19:02Z"}}]}
	I0318 12:44:50.712102    5712 default_sa.go:45] found service account: "default"
	I0318 12:44:50.712134    5712 default_sa.go:55] duration metric: took 6.3425ms for default service account to be created ...
	I0318 12:44:50.712134    5712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 12:44:50.712209    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:50.712270    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:50.712270    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.712270    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:50.721267    5712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:44:50.721267    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:50.721267    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:50.721267    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:50.721267    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:50 GMT
	I0318 12:44:50.721267    5712 round_trippers.go:580]     Audit-Id: 772c77a3-4597-4f1c-8e4b-46abbe90e2a2
	I0318 12:44:50.721267    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:50.721267    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:50.722618    5712 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2068"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"2054","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83063 chars]
	I0318 12:44:50.726985    5712 system_pods.go:86] 12 kube-system pods found
	I0318 12:44:50.726985    5712 system_pods.go:89] "coredns-5dd5756b68-fgn7v" [7bc52797-b4bd-4046-b3d5-fae9c8ccd13b] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "etcd-multinode-642600" [6f0ca14e-af4b-4442-8a48-28b69c699976] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kindnet-d5llj" [caa4170d-6120-414a-950c-92a0380a70b8] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kindnet-kpt4f" [acd9d7a0-0e27-4bbb-8562-6fbf374742ca] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kindnet-thkjp" [a7e20c36-c1d1-4146-a66c-40448e1ae0e5] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kube-apiserver-multinode-642600" [ab8e6b8b-cbac-4c90-8f57-9af2760ced9c] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kube-controller-manager-multinode-642600" [1dd2a576-c5a0-44e5-b194-545e8b18962c] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kube-proxy-4dg79" [449242c2-ad12-4da5-b339-3be7ab8a9b16] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kube-proxy-khbjt" [594efa46-7e30-40e6-92dd-9c9c80bc787a] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kube-proxy-vts9f" [9545be8f-07fd-49dd-99bd-e9e976e65e7b] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kube-scheduler-multinode-642600" [52e29d3b-d6e9-4109-916d-74123a2ab190] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "storage-provisioner" [d2718b8a-26a9-4c86-bf9a-221d1ee23ceb] Running
	I0318 12:44:50.726985    5712 system_pods.go:126] duration metric: took 14.8503ms to wait for k8s-apps to be running ...
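system_pods.go above lists /api/v1/namespaces/kube-system/pods and reports each pod's phase before declaring k8s-apps running. A hedged client-go equivalent of that check (the kubeconfig setup is an assumption; the namespace and the Running predicate come from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Same request the log shows: GET /api/v1/namespaces/kube-system/pods.
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			fmt.Printf("%q [%s] Running\n", p.Name, p.UID)
		} else {
			fmt.Printf("%q [%s] %s (not ready yet)\n", p.Name, p.UID, p.Status.Phase)
		}
	}
}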
	I0318 12:44:50.726985    5712 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 12:44:50.739844    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:44:50.769823    5712 system_svc.go:56] duration metric: took 42.8377ms WaitForService to wait for kubelet
	I0318 12:44:50.770286    5712 kubeadm.go:576] duration metric: took 1m14.5353016s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:44:50.770355    5712 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:44:50.770491    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes
	I0318 12:44:50.770491    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:50.770576    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.770576    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:50.774846    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:50.774846    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:50.774846    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:50.774846    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:50.774846    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:50.774846    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:50.774846    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:50 GMT
	I0318 12:44:50.774846    5712 round_trippers.go:580]     Audit-Id: 051a936f-32d5-465f-8a31-b7014848ef69
	I0318 12:44:50.774846    5712 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2068"},"items":[{"metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16259 chars]
	I0318 12:44:50.776623    5712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:44:50.776685    5712 node_conditions.go:123] node cpu capacity is 2
	I0318 12:44:50.776718    5712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:44:50.776718    5712 node_conditions.go:123] node cpu capacity is 2
	I0318 12:44:50.776718    5712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:44:50.776718    5712 node_conditions.go:123] node cpu capacity is 2
	I0318 12:44:50.776718    5712 node_conditions.go:105] duration metric: took 6.3638ms to run NodePressure ...
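The NodePressure step reads each node's capacity out of the NodeList response above (cpu: 2 and ephemeral-storage: 17734596Ki per node). A short client-go sketch of the same read, under the same kubeconfig assumption as the earlier sketches:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Same request as the log: GET /api/v1/nodes.
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: storage ephemeral capacity is %s, cpu capacity is %s\n",
			n.Name, eph.String(), cpu.String())
	}
}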
	I0318 12:44:50.776778    5712 start.go:240] waiting for startup goroutines ...
	I0318 12:44:50.776778    5712 start.go:245] waiting for cluster config update ...
	I0318 12:44:50.776778    5712 start.go:254] writing updated cluster config ...
	I0318 12:44:50.780972    5712 out.go:177] 
	I0318 12:44:50.783783    5712 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:44:50.796808    5712 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:44:50.796808    5712 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:44:50.804019    5712 out.go:177] * Starting "multinode-642600-m02" worker node in "multinode-642600" cluster
	I0318 12:44:50.806600    5712 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 12:44:50.806600    5712 cache.go:56] Caching tarball of preloaded images
	I0318 12:44:50.806600    5712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 12:44:50.806600    5712 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 12:44:50.806600    5712 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:44:50.809297    5712 start.go:360] acquireMachinesLock for multinode-642600-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:44:50.810322    5712 start.go:364] duration metric: took 1.0247ms to acquireMachinesLock for "multinode-642600-m02"
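acquireMachinesLock above is configured with Delay:500ms and Timeout:13m0s, i.e. retry every half second for up to thirteen minutes. A generic Go sketch of that acquire-with-retry shape (an assumption-laden illustration only: minikube's machines lock is a cross-process lock, not a sync.Mutex, and TryLock requires Go 1.18+):

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// tryAcquire retries a lock on a fixed delay until a deadline, loosely
// modelling the {Delay:500ms Timeout:13m0s} spec shown in the log.
func tryAcquire(mu *sync.Mutex, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !mu.TryLock() {
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	var mu sync.Mutex
	start := time.Now()
	if err := tryAcquire(&mu, 500*time.Millisecond, 13*time.Minute); err != nil {
		panic(err)
	}
	defer mu.Unlock()
	fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
}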
	I0318 12:44:50.810622    5712 start.go:96] Skipping create...Using existing machine configuration
	I0318 12:44:50.810622    5712 fix.go:54] fixHost starting: m02
	I0318 12:44:50.810897    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:44:53.078575    5712 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 12:44:53.078575    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:53.078706    5712 fix.go:112] recreateIfNeeded on multinode-642600-m02: state=Stopped err=<nil>
	W0318 12:44:53.078706    5712 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 12:44:53.082461    5712 out.go:177] * Restarting existing hyperv VM for "multinode-642600-m02" ...
	I0318 12:44:53.086092    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-642600-m02
	I0318 12:44:56.372131    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:44:56.372131    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:56.372131    5712 main.go:141] libmachine: Waiting for host to start...
	I0318 12:44:56.372131    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:44:58.726364    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:44:58.726766    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:58.726859    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:01.322171    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:01.323088    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:02.332172    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:45:04.660156    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:04.660156    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:04.660156    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:07.359664    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:07.359664    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:08.370665    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:45:10.686002    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:10.686135    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:10.686238    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:13.334579    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:13.335669    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:14.342955    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:45:16.674409    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:16.674409    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:16.674558    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:19.368002    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:19.368002    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:20.382337    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:45:22.700522    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:22.701157    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:22.701157    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:25.463389    5712 main.go:141] libmachine: [stdout =====>] : 172.25.144.186
	
	I0318 12:45:25.463389    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:25.467285    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:45:27.715593    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:27.715593    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:27.716525    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:30.430539    5712 main.go:141] libmachine: [stdout =====>] : 172.25.144.186
	
	I0318 12:45:30.430649    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:30.430912    5712 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:45:30.433568    5712 machine.go:94] provisionDockerMachine start ...
	I0318 12:45:30.433568    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:45:32.660187    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:32.660509    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:32.660578    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]

** /stderr **
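
(The truncated stderr above shows the shape of libmachine's restart wait loop on Hyper-V: poll the VM state, then the first IP address of the first network adapter, and repeat until an address comes back. Below is a minimal standalone Go sketch of the same poll for triage purposes, reusing the exact PowerShell expressions from the log; the one-second interval and three-minute deadline are illustrative assumptions, not minikube's actual values.)

// poll_hyperv.go - illustrative sketch only; not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psQuery runs a PowerShell expression the way the log shows libmachine
// doing it (-NoProfile -NonInteractive) and returns trimmed stdout.
func psQuery(expr string) (string, error) {
	out, err := exec.Command("powershell.exe",
		"-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	vm := "multinode-642600-m02" // VM name taken from the log above

	// Expressions copied verbatim from the stderr capture.
	stateExpr := fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm)
	ipExpr := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)

	deadline := time.Now().Add(3 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		state, err := psQuery(stateExpr)
		if err != nil || state != "Running" {
			time.Sleep(time.Second)
			continue
		}
		// A freshly started VM reports "Running" before DHCP completes,
		// which is why the log shows several empty stdout lines here.
		if ip, err := psQuery(ipExpr); err == nil && ip != "" {
			fmt.Println("VM is up at", ip)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for an IP address")
}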
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-642600" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-642600
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-642600: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-642600" : context deadline exceeded
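
(The "context deadline exceeded (0s)" reported here is the failure mode Go's os/exec produces when the test's context has already expired before the follow-up command starts, so it is cut off after zero seconds. A minimal sketch of that mode under an assumed zero remaining budget, using the same binary invocation as the log:)

// deadline.go - illustrates "context deadline exceeded (0s)"; not the test harness itself.
package main

import (
	"context"
	"fmt"
	"os/exec"
)

func main() {
	// By this point in the test the retry budget is spent, so model the
	// parent context as already expired (0s left).
	ctx, cancel := context.WithTimeout(context.Background(), 0)
	defer cancel()

	cmd := exec.CommandContext(ctx,
		"out/minikube-windows-amd64.exe", "node", "list", "-p", "multinode-642600")
	if err := cmd.Run(); err != nil {
		// Depending on the Go version, err is the context error itself or a
		// kill signal; either way the command never gets a chance to run.
		fmt.Println("run failed:", err)
	}
	fmt.Println(ctx.Err()) // context.DeadlineExceeded
}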
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-642600	172.25.151.112
multinode-642600-m02	172.25.159.102
multinode-642600-m03	172.25.157.200

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-642600 -n multinode-642600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-642600 -n multinode-642600: (12.8515761s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 logs -n 25: (11.6952463s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-642600 cp testdata\cp-test.txt                                                                                 | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:31 UTC | 18 Mar 24 12:31 UTC |
	|         | multinode-642600-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-642600 ssh -n                                                                                                  | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:31 UTC | 18 Mar 24 12:31 UTC |
	|         | multinode-642600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-642600 cp multinode-642600-m02:/home/docker/cp-test.txt                                                        | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:31 UTC | 18 Mar 24 12:31 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4179459229\001\cp-test_multinode-642600-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-642600 ssh -n                                                                                                  | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:31 UTC | 18 Mar 24 12:31 UTC |
	|         | multinode-642600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-642600 cp multinode-642600-m02:/home/docker/cp-test.txt                                                        | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:31 UTC | 18 Mar 24 12:32 UTC |
	|         | multinode-642600:/home/docker/cp-test_multinode-642600-m02_multinode-642600.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-642600 ssh -n                                                                                                  | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:32 UTC | 18 Mar 24 12:32 UTC |
	|         | multinode-642600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-642600 ssh -n multinode-642600 sudo cat                                                                        | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:32 UTC | 18 Mar 24 12:32 UTC |
	|         | /home/docker/cp-test_multinode-642600-m02_multinode-642600.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-642600 cp multinode-642600-m02:/home/docker/cp-test.txt                                                        | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:32 UTC | 18 Mar 24 12:32 UTC |
	|         | multinode-642600-m03:/home/docker/cp-test_multinode-642600-m02_multinode-642600-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-642600 ssh -n                                                                                                  | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:32 UTC | 18 Mar 24 12:32 UTC |
	|         | multinode-642600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-642600 ssh -n multinode-642600-m03 sudo cat                                                                    | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:32 UTC | 18 Mar 24 12:33 UTC |
	|         | /home/docker/cp-test_multinode-642600-m02_multinode-642600-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-642600 cp testdata\cp-test.txt                                                                                 | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:33 UTC | 18 Mar 24 12:33 UTC |
	|         | multinode-642600-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-642600 ssh -n                                                                                                  | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:33 UTC | 18 Mar 24 12:33 UTC |
	|         | multinode-642600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-642600 cp multinode-642600-m03:/home/docker/cp-test.txt                                                        | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:33 UTC | 18 Mar 24 12:33 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4179459229\001\cp-test_multinode-642600-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-642600 ssh -n                                                                                                  | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:33 UTC | 18 Mar 24 12:33 UTC |
	|         | multinode-642600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-642600 cp multinode-642600-m03:/home/docker/cp-test.txt                                                        | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:33 UTC | 18 Mar 24 12:34 UTC |
	|         | multinode-642600:/home/docker/cp-test_multinode-642600-m03_multinode-642600.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-642600 ssh -n                                                                                                  | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:34 UTC | 18 Mar 24 12:34 UTC |
	|         | multinode-642600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-642600 ssh -n multinode-642600 sudo cat                                                                        | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:34 UTC | 18 Mar 24 12:34 UTC |
	|         | /home/docker/cp-test_multinode-642600-m03_multinode-642600.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-642600 cp multinode-642600-m03:/home/docker/cp-test.txt                                                        | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:34 UTC | 18 Mar 24 12:34 UTC |
	|         | multinode-642600-m02:/home/docker/cp-test_multinode-642600-m03_multinode-642600-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-642600 ssh -n                                                                                                  | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:34 UTC | 18 Mar 24 12:34 UTC |
	|         | multinode-642600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-642600 ssh -n multinode-642600-m02 sudo cat                                                                    | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:34 UTC | 18 Mar 24 12:35 UTC |
	|         | /home/docker/cp-test_multinode-642600-m03_multinode-642600-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-642600 node stop m03                                                                                           | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:35 UTC | 18 Mar 24 12:35 UTC |
	| node    | multinode-642600 node start                                                                                              | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:36 UTC | 18 Mar 24 12:38 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-642600                                                                                                 | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:39 UTC |                     |
	| stop    | -p multinode-642600                                                                                                      | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:39 UTC | 18 Mar 24 12:41 UTC |
	| start   | -p multinode-642600                                                                                                      | multinode-642600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 12:41 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:41:14
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:41:14.174495    5712 out.go:291] Setting OutFile to fd 1340 ...
	I0318 12:41:14.175337    5712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:41:14.175337    5712 out.go:304] Setting ErrFile to fd 1412...
	I0318 12:41:14.175337    5712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:41:14.198574    5712 out.go:298] Setting JSON to false
	I0318 12:41:14.201477    5712 start.go:129] hostinfo: {"hostname":"minikube6","uptime":140998,"bootTime":1710624675,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0318 12:41:14.201477    5712 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 12:41:14.339915    5712 out.go:177] * [multinode-642600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0318 12:41:14.345294    5712 notify.go:220] Checking for updates...
	I0318 12:41:14.393369    5712 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:41:14.597473    5712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:41:14.752675    5712 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0318 12:41:14.939023    5712 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 12:41:15.132692    5712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:41:15.191577    5712 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:41:15.192298    5712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:41:21.044712    5712 out.go:177] * Using the hyperv driver based on existing profile
	I0318 12:41:21.232738    5712 start.go:297] selected driver: hyperv
	I0318 12:41:21.232837    5712 start.go:901] validating driver "hyperv" against &{Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.151.112 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.159.102 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.157.200 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:41:21.233162    5712 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:41:21.289013    5712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:41:21.289420    5712 cni.go:84] Creating CNI manager for ""
	I0318 12:41:21.289420    5712 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 12:41:21.289851    5712 start.go:340] cluster config:
	{Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.151.112 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.159.102 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.157.200 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:41:21.290276    5712 iso.go:125] acquiring lock: {Name:mk859ea173f7c19f70b69d7017f4a5a661cd1500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:41:21.434548    5712 out.go:177] * Starting "multinode-642600" primary control-plane node in "multinode-642600" cluster
	I0318 12:41:21.453985    5712 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 12:41:21.455138    5712 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 12:41:21.455138    5712 cache.go:56] Caching tarball of preloaded images
	I0318 12:41:21.455845    5712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 12:41:21.455918    5712 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 12:41:21.456298    5712 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:41:21.460836    5712 start.go:360] acquireMachinesLock for multinode-642600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:41:21.461043    5712 start.go:364] duration metric: took 106.3µs to acquireMachinesLock for "multinode-642600"
	I0318 12:41:21.461240    5712 start.go:96] Skipping create...Using existing machine configuration
	I0318 12:41:21.461332    5712 fix.go:54] fixHost starting: 
	I0318 12:41:21.461683    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:24.316826    5712 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 12:41:24.317689    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:24.317689    5712 fix.go:112] recreateIfNeeded on multinode-642600: state=Stopped err=<nil>
	W0318 12:41:24.317689    5712 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 12:41:24.342056    5712 out.go:177] * Restarting existing hyperv VM for "multinode-642600" ...
	I0318 12:41:24.530542    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-642600
	I0318 12:41:27.730125    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:41:27.730125    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:27.730125    5712 main.go:141] libmachine: Waiting for host to start...
	I0318 12:41:27.730125    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:30.053094    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:41:30.053154    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:30.053154    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:41:32.613670    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:41:32.613670    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:33.623916    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:35.862239    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:41:35.862239    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:35.862239    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:41:38.475717    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:41:38.475717    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:39.479064    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:41.712232    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:41:41.712666    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:41.712666    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:41:44.354805    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:41:44.354805    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:45.359948    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:47.681725    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:41:47.681784    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:47.681784    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:41:50.321516    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:41:50.321516    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:51.327615    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:53.627594    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:41:53.628519    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:53.628694    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:41:56.272362    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:41:56.272362    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:56.276226    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:41:58.473377    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:41:58.473377    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:41:58.473817    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:01.099563    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:01.099563    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:01.100689    5712 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:42:01.104150    5712 machine.go:94] provisionDockerMachine start ...
	I0318 12:42:01.104150    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:03.302275    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:03.302275    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:03.302366    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:05.967935    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:05.967935    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:05.974688    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:05.975228    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:05.975319    5712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 12:42:06.112098    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 12:42:06.112176    5712 buildroot.go:166] provisioning hostname "multinode-642600"
	I0318 12:42:06.112306    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:08.281483    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:08.281701    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:08.281701    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:10.900781    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:10.900781    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:10.906449    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:10.906591    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:10.906591    5712 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-642600 && echo "multinode-642600" | sudo tee /etc/hostname
	I0318 12:42:11.066386    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-642600
	
	I0318 12:42:11.066386    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:13.270565    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:13.270565    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:13.270565    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:15.963428    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:15.963428    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:15.970100    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:15.970699    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:15.970699    5712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-642600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-642600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-642600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:42:16.124542    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:42:16.124542    5712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0318 12:42:16.124542    5712 buildroot.go:174] setting up certificates
	I0318 12:42:16.124542    5712 provision.go:84] configureAuth start
	I0318 12:42:16.124542    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:18.322060    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:18.322060    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:18.322462    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:21.007290    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:21.007881    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:21.007881    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:23.257503    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:23.257503    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:23.257670    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:25.902074    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:25.902281    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:25.902281    5712 provision.go:143] copyHostCerts
	I0318 12:42:25.902463    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0318 12:42:25.902463    5712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0318 12:42:25.902463    5712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0318 12:42:25.903350    5712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 12:42:25.904398    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0318 12:42:25.904819    5712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0318 12:42:25.904819    5712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0318 12:42:25.904819    5712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 12:42:25.906175    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0318 12:42:25.906433    5712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0318 12:42:25.906433    5712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0318 12:42:25.906433    5712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 12:42:25.907646    5712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-642600 san=[127.0.0.1 172.25.148.129 localhost minikube multinode-642600]
	I0318 12:42:26.286423    5712 provision.go:177] copyRemoteCerts
	I0318 12:42:26.300775    5712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:42:26.300775    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:28.522309    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:28.522713    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:28.522713    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:31.113104    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:31.113104    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:31.114054    5712 sshutil.go:53] new ssh client: &{IP:172.25.148.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:42:31.226822    5712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9260158s)
	I0318 12:42:31.226822    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 12:42:31.227483    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:42:31.278696    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 12:42:31.279683    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0318 12:42:31.325681    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 12:42:31.326161    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 12:42:31.372268    5712 provision.go:87] duration metric: took 15.2476311s to configureAuth
	I0318 12:42:31.372410    5712 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:42:31.373444    5712 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:42:31.373624    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:33.576497    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:33.576669    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:33.576669    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:36.198535    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:36.198535    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:36.205452    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:36.206073    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:36.206073    5712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 12:42:36.334957    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 12:42:36.335033    5712 buildroot.go:70] root file system type: tmpfs
	I0318 12:42:36.335100    5712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 12:42:36.335100    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:38.493124    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:38.493124    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:38.493124    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:41.153303    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:41.153303    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:41.162945    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:41.162945    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:41.163461    5712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 12:42:41.328264    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 12:42:41.328264    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:43.465974    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:43.465974    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:43.466290    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:46.120930    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:46.120930    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:46.128155    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:46.128155    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:46.128155    5712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 12:42:48.730446    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
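	
	The command above is minikube's change-detection idiom: diff -u returns nonzero either when the two files differ or, as here, when the installed unit does not exist yet ("can't stat ... No such file or directory"), and only in that case is the new file moved into place and the service reloaded. A sketch of the same pattern with the paths from this run:
	
	# Sketch: install a config file only when it is new or has changed,
	# then reload systemd and restart the service.
	cur=/lib/systemd/system/docker.service
	new=/lib/systemd/system/docker.service.new
	if ! sudo diff -u "$cur" "$new"; then
	    sudo mv "$new" "$cur"
	    sudo systemctl daemon-reload
	    sudo systemctl enable docker
	    sudo systemctl restart docker
	fi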
	
	I0318 12:42:48.730596    5712 machine.go:97] duration metric: took 47.6259969s to provisionDockerMachine
	I0318 12:42:48.730596    5712 start.go:293] postStartSetup for "multinode-642600" (driver="hyperv")
	I0318 12:42:48.730596    5712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:42:48.743935    5712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:42:48.743935    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:50.958241    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:50.958241    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:50.958842    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:53.583747    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:53.583747    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:53.584310    5712 sshutil.go:53] new ssh client: &{IP:172.25.148.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:42:53.693091    5712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9490814s)
	I0318 12:42:53.705692    5712 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:42:53.716868    5712 command_runner.go:130] > NAME=Buildroot
	I0318 12:42:53.716868    5712 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 12:42:53.716868    5712 command_runner.go:130] > ID=buildroot
	I0318 12:42:53.716868    5712 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 12:42:53.716868    5712 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 12:42:53.716868    5712 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:42:53.716868    5712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0318 12:42:53.717834    5712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0318 12:42:53.718936    5712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> 91202.pem in /etc/ssl/certs
	I0318 12:42:53.718966    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /etc/ssl/certs/91202.pem
	I0318 12:42:53.731248    5712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:42:53.749395    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /etc/ssl/certs/91202.pem (1708 bytes)
	I0318 12:42:53.800325    5712 start.go:296] duration metric: took 5.069697s for postStartSetup
	I0318 12:42:53.800480    5712 fix.go:56] duration metric: took 1m32.3386007s for fixHost
	I0318 12:42:53.800549    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:42:55.980602    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:42:55.980808    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:55.980862    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:42:58.629512    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:42:58.629512    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:42:58.637365    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:58.638015    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:42:58.638048    5712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 12:42:58.768996    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710765778.766537739
	
	I0318 12:42:58.769057    5712 fix.go:216] guest clock: 1710765778.766537739
	I0318 12:42:58.769057    5712 fix.go:229] Guest: 2024-03-18 12:42:58.766537739 +0000 UTC Remote: 2024-03-18 12:42:53.8004808 +0000 UTC m=+99.805653901 (delta=4.966056939s)
	I0318 12:42:58.769191    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:43:00.953898    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:43:00.953898    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:00.953898    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:43:03.562528    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:43:03.562528    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:03.569787    5712 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:03.570112    5712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe69f80] 0xe6cb60 <nil>  [] 0s} 172.25.148.129 22 <nil> <nil>}
	I0318 12:43:03.570112    5712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710765778
	I0318 12:43:03.709754    5712 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 12:42:58 UTC 2024
	
	I0318 12:43:03.709754    5712 fix.go:236] clock set: Mon Mar 18 12:42:58 UTC 2024
	 (err=<nil>)
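	
	The exchange above is minikube's guest-clock fix: it reads the guest's epoch time with date +%s.%N, compares it against the host clock (a delta of about 4.97s here), and writes a corrected time back with date -s. A rough equivalent over plain ssh (host and user taken from this run; the awk arithmetic is illustrative):
	
	# Sketch: measure host/guest clock skew and reset the guest clock.
	guest=$(ssh docker@172.25.148.129 'date +%s.%N')   # guest epoch seconds
	host=$(date +%s.%N)                                # local epoch seconds
	awk -v h="$host" -v g="$guest" 'BEGIN{printf "clock delta: %.3fs\n", g-h}'
	# Write the host's whole-second epoch back to the guest:
	ssh docker@172.25.148.129 "sudo date -s @${host%.*}"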
	I0318 12:43:03.709754    5712 start.go:83] releasing machines lock for "multinode-642600", held for 1m42.2480452s
	I0318 12:43:03.710972    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:43:05.896205    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:43:05.896505    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:05.896608    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:43:08.523466    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:43:08.523466    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:08.527778    5712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:43:08.527778    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:43:08.538570    5712 ssh_runner.go:195] Run: cat /version.json
	I0318 12:43:08.538570    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:43:10.809161    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:43:10.809161    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:10.809534    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:43:10.809534    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:43:10.809706    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:10.809816    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:43:13.549579    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:43:13.549579    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:13.549692    5712 sshutil.go:53] new ssh client: &{IP:172.25.148.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:43:13.576311    5712 main.go:141] libmachine: [stdout =====>] : 172.25.148.129
	
	I0318 12:43:13.576311    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:43:13.576784    5712 sshutil.go:53] new ssh client: &{IP:172.25.148.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:43:13.767080    5712 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 12:43:13.767179    5712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2393684s)
	I0318 12:43:13.767179    5712 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0318 12:43:13.767179    5712 ssh_runner.go:235] Completed: cat /version.json: (5.2285766s)
	I0318 12:43:13.780439    5712 ssh_runner.go:195] Run: systemctl --version
	I0318 12:43:13.791580    5712 command_runner.go:130] > systemd 252 (252)
	I0318 12:43:13.791728    5712 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 12:43:13.803493    5712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 12:43:13.812744    5712 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 12:43:13.813325    5712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:43:13.826084    5712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:43:13.855578    5712 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0318 12:43:13.856454    5712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
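	
	Disabling the bridge/podman CNI configs by renaming them with a .mk_disabled suffix, rather than deleting them, keeps them recoverable. A sketch of the disable step and the matching restore (the restore loop is not part of this run):
	
	# Sketch: park conflicting CNI configs, and undo later if needed.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	for f in /etc/cni/net.d/*.mk_disabled; do
	    [ -e "$f" ] || continue            # no-op when nothing was disabled
	    sudo mv "$f" "${f%.mk_disabled}"
	done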
	I0318 12:43:13.856577    5712 start.go:494] detecting cgroup driver to use...
	I0318 12:43:13.857036    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:43:13.897705    5712 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0318 12:43:13.910741    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 12:43:13.946052    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 12:43:13.966966    5712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 12:43:13.979953    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 12:43:14.012337    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 12:43:14.044571    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 12:43:14.078052    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 12:43:14.113260    5712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:43:14.145704    5712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 12:43:14.181986    5712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:43:14.200399    5712 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 12:43:14.212857    5712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:43:14.248658    5712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:14.454652    5712 ssh_runner.go:195] Run: sudo systemctl restart containerd
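	
	After that batch of sed edits, a quick grep would confirm containerd was actually switched to the cgroupfs driver before anything depends on it. This verification step is added here for illustration and was not run in this log:
	
	# Sketch: verify the rewritten containerd settings took effect.
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	sudo systemctl is-active containerd && echo "containerd restarted OK"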
	I0318 12:43:14.487097    5712 start.go:494] detecting cgroup driver to use...
	I0318 12:43:14.499933    5712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 12:43:14.522002    5712 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0318 12:43:14.522061    5712 command_runner.go:130] > [Unit]
	I0318 12:43:14.522123    5712 command_runner.go:130] > Description=Docker Application Container Engine
	I0318 12:43:14.522123    5712 command_runner.go:130] > Documentation=https://docs.docker.com
	I0318 12:43:14.522123    5712 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0318 12:43:14.522123    5712 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0318 12:43:14.522123    5712 command_runner.go:130] > StartLimitBurst=3
	I0318 12:43:14.522123    5712 command_runner.go:130] > StartLimitIntervalSec=60
	I0318 12:43:14.522123    5712 command_runner.go:130] > [Service]
	I0318 12:43:14.522123    5712 command_runner.go:130] > Type=notify
	I0318 12:43:14.522123    5712 command_runner.go:130] > Restart=on-failure
	I0318 12:43:14.522123    5712 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0318 12:43:14.522123    5712 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0318 12:43:14.522123    5712 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0318 12:43:14.522123    5712 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0318 12:43:14.522123    5712 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0318 12:43:14.522123    5712 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0318 12:43:14.522123    5712 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0318 12:43:14.522123    5712 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0318 12:43:14.522123    5712 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0318 12:43:14.522123    5712 command_runner.go:130] > ExecStart=
	I0318 12:43:14.522123    5712 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0318 12:43:14.522123    5712 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0318 12:43:14.522123    5712 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0318 12:43:14.522123    5712 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0318 12:43:14.522123    5712 command_runner.go:130] > LimitNOFILE=infinity
	I0318 12:43:14.522123    5712 command_runner.go:130] > LimitNPROC=infinity
	I0318 12:43:14.522123    5712 command_runner.go:130] > LimitCORE=infinity
	I0318 12:43:14.522123    5712 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0318 12:43:14.522123    5712 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0318 12:43:14.522123    5712 command_runner.go:130] > TasksMax=infinity
	I0318 12:43:14.522123    5712 command_runner.go:130] > TimeoutStartSec=0
	I0318 12:43:14.522123    5712 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0318 12:43:14.522123    5712 command_runner.go:130] > Delegate=yes
	I0318 12:43:14.522123    5712 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0318 12:43:14.522123    5712 command_runner.go:130] > KillMode=process
	I0318 12:43:14.522123    5712 command_runner.go:130] > [Install]
	I0318 12:43:14.522123    5712 command_runner.go:130] > WantedBy=multi-user.target
	I0318 12:43:14.532000    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:43:14.565326    5712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:43:14.611474    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:43:14.648709    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 12:43:14.684182    5712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 12:43:14.750586    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 12:43:14.779613    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:43:14.826532    5712 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0318 12:43:14.837540    5712 ssh_runner.go:195] Run: which cri-dockerd
	I0318 12:43:14.844394    5712 command_runner.go:130] > /usr/bin/cri-dockerd
	I0318 12:43:14.856947    5712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 12:43:14.876240    5712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 12:43:14.926777    5712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 12:43:15.142802    5712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 12:43:15.335751    5712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 12:43:15.335922    5712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 12:43:15.385443    5712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:15.603865    5712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 12:43:18.273925    5712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6700435s)
	I0318 12:43:18.286887    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 12:43:18.325011    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 12:43:18.362806    5712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 12:43:18.582602    5712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 12:43:18.798246    5712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:19.015375    5712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 12:43:19.059889    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 12:43:19.095906    5712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:19.318696    5712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 12:43:19.432283    5712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 12:43:19.444288    5712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 12:43:19.453286    5712 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0318 12:43:19.453286    5712 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 12:43:19.453286    5712 command_runner.go:130] > Device: 0,22	Inode: 849         Links: 1
	I0318 12:43:19.453286    5712 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0318 12:43:19.453286    5712 command_runner.go:130] > Access: 2024-03-18 12:43:19.343967496 +0000
	I0318 12:43:19.453286    5712 command_runner.go:130] > Modify: 2024-03-18 12:43:19.343967496 +0000
	I0318 12:43:19.453286    5712 command_runner.go:130] > Change: 2024-03-18 12:43:19.346967492 +0000
	I0318 12:43:19.453286    5712 command_runner.go:130] >  Birth: -
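	
	"Will wait 60s for socket path" corresponds to polling stat on /var/run/cri-dockerd.sock until it appears. A minimal equivalent wait loop:
	
	# Sketch: wait up to 60s for a unix socket to appear.
	sock=/var/run/cri-dockerd.sock
	for i in $(seq 1 60); do
	    [ -S "$sock" ] && { echo "socket ready after ${i}s"; break; }
	    sleep 1
	done
	[ -S "$sock" ] || { echo "timed out waiting for $sock" >&2; exit 1; }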
	I0318 12:43:19.453286    5712 start.go:562] Will wait 60s for crictl version
	I0318 12:43:19.465319    5712 ssh_runner.go:195] Run: which crictl
	I0318 12:43:19.471975    5712 command_runner.go:130] > /usr/bin/crictl
	I0318 12:43:19.485121    5712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:43:19.572755    5712 command_runner.go:130] > Version:  0.1.0
	I0318 12:43:19.572843    5712 command_runner.go:130] > RuntimeName:  docker
	I0318 12:43:19.572843    5712 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0318 12:43:19.572843    5712 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 12:43:19.572967    5712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 12:43:19.582380    5712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 12:43:19.617922    5712 command_runner.go:130] > 25.0.4
	I0318 12:43:19.627956    5712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 12:43:19.664705    5712 command_runner.go:130] > 25.0.4
	I0318 12:43:19.667912    5712 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 12:43:19.667912    5712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 12:43:19.672492    5712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 12:43:19.672492    5712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 12:43:19.672492    5712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 12:43:19.672492    5712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ae:0d:2c Flags:up|broadcast|multicast|running}
	I0318 12:43:19.676312    5712 ip.go:210] interface addr: fe80::f8a6:d6b6:cc4:1ba0/64
	I0318 12:43:19.676312    5712 ip.go:210] interface addr: 172.25.144.1/20
	I0318 12:43:19.690287    5712 ssh_runner.go:195] Run: grep 172.25.144.1	host.minikube.internal$ /etc/hosts
	I0318 12:43:19.697806    5712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
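	
	The one-liner above updates /etc/hosts idempotently: strip any existing line tagged with the name, append the fresh IP/name pair, then cp (not mv) the temp file back so the original inode is preserved. The same pattern repeats further below for control-plane.minikube.internal. As a reusable function (the function name is hypothetical):
	
	# Sketch: idempotent /etc/hosts updater, mirroring the grep/echo/cp above.
	update_hosts_entry() {
	    local ip="$1" name="$2"
	    # NB: dots in "$name" match loosely as regex; adequate for this use.
	    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	}
	update_hosts_entry 172.25.144.1 host.minikube.internal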
	I0318 12:43:19.721930    5712 kubeadm.go:877] updating cluster {Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.129 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.159.102 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.157.200 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 12:43:19.721930    5712 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 12:43:19.732280    5712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 12:43:19.763775    5712 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 12:43:19.763775    5712 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 12:43:19.763775    5712 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 12:43:19.763941    5712 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0318 12:43:19.763941    5712 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0318 12:43:19.763941    5712 docker.go:615] Images already preloaded, skipping extraction
	I0318 12:43:19.775255    5712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 12:43:19.807174    5712 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 12:43:19.807442    5712 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 12:43:19.807563    5712 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 12:43:19.807563    5712 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0318 12:43:19.807641    5712 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0318 12:43:19.807641    5712 cache_images.go:84] Images are preloaded, skipping loading
	I0318 12:43:19.807641    5712 kubeadm.go:928] updating node { 172.25.148.129 8443 v1.28.4 docker true true} ...
	I0318 12:43:19.807925    5712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-642600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.148.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:43:19.817637    5712 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 12:43:19.855673    5712 command_runner.go:130] > cgroupfs
	I0318 12:43:19.855946    5712 cni.go:84] Creating CNI manager for ""
	I0318 12:43:19.855946    5712 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 12:43:19.855946    5712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 12:43:19.855946    5712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.148.129 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-642600 NodeName:multinode-642600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.148.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.148.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 12:43:19.855946    5712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.148.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-642600"
	  kubeletExtraArgs:
	    node-ip: 172.25.148.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.148.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
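	
	The generated config is later written to /var/tmp/minikube/kubeadm.yaml.new and diffed against the active file (see the drift check further below). A dry-run is one way to sanity-check such a file before it is applied; this step is illustrative and was not executed in this run:
	
	# Sketch: validate the generated kubeadm config without touching the node.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run | head -n 20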
	
	I0318 12:43:19.869595    5712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:43:19.890835    5712 command_runner.go:130] > kubeadm
	I0318 12:43:19.890894    5712 command_runner.go:130] > kubectl
	I0318 12:43:19.890894    5712 command_runner.go:130] > kubelet
	I0318 12:43:19.890894    5712 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 12:43:19.902612    5712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 12:43:19.923839    5712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0318 12:43:19.955272    5712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:43:19.987507    5712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0318 12:43:20.030928    5712 ssh_runner.go:195] Run: grep 172.25.148.129	control-plane.minikube.internal$ /etc/hosts
	I0318 12:43:20.037057    5712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.148.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:43:20.071648    5712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:20.280384    5712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:43:20.309389    5712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600 for IP: 172.25.148.129
	I0318 12:43:20.309389    5712 certs.go:194] generating shared ca certs ...
	I0318 12:43:20.309389    5712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:20.310375    5712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0318 12:43:20.311438    5712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0318 12:43:20.311746    5712 certs.go:256] generating profile certs ...
	I0318 12:43:20.312055    5712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\client.key
	I0318 12:43:20.312780    5712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key.79698273
	I0318 12:43:20.312780    5712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt.79698273 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.148.129]
	I0318 12:43:20.558565    5712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt.79698273 ...
	I0318 12:43:20.558565    5712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt.79698273: {Name:mk2238ba7bfd2f6a337bcb117542d06a7c4668e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:20.560290    5712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key.79698273 ...
	I0318 12:43:20.560290    5712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key.79698273: {Name:mk79ccce3f71f4e955e089d2a0d5269242d694a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:20.561767    5712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt.79698273 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt
	I0318 12:43:20.576875    5712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key.79698273 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key
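	
	crypto.go issues the apiserver serving certificate for the four IP SANs listed above, writes it under a numeric suffix (.79698273), then copies it to the canonical name. minikube does this in Go; an openssl rendition of the same issuance, with illustrative file names, would be:
	
	# Sketch: CA-signed serving cert carrying the same IP SANs as above.
	openssl req -new -newkey rsa:2048 -nodes \
	    -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	    -out apiserver.crt -days 365 \
	    -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:172.25.148.129')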
	I0318 12:43:20.578257    5712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.key
	I0318 12:43:20.578257    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:43:20.578483    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:43:20.578604    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:43:20.578740    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:43:20.578886    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 12:43:20.579128    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 12:43:20.579230    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 12:43:20.579341    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 12:43:20.580232    5712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem (1338 bytes)
	W0318 12:43:20.580611    5712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120_empty.pem, impossibly tiny 0 bytes
	I0318 12:43:20.580719    5712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0318 12:43:20.581083    5712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 12:43:20.581300    5712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 12:43:20.581520    5712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 12:43:20.581966    5712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem (1708 bytes)
	I0318 12:43:20.582239    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:20.582410    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem -> /usr/share/ca-certificates/9120.pem
	I0318 12:43:20.582610    5712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem -> /usr/share/ca-certificates/91202.pem
	I0318 12:43:20.583560    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:43:20.635212    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:43:20.683038    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:43:20.732428    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:43:20.783809    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 12:43:20.834564    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 12:43:20.894689    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:43:20.946175    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 12:43:20.997273    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:43:21.044263    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9120.pem --> /usr/share/ca-certificates/9120.pem (1338 bytes)
	I0318 12:43:21.094345    5712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\91202.pem --> /usr/share/ca-certificates/91202.pem (1708 bytes)
	I0318 12:43:21.143361    5712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 12:43:21.191133    5712 ssh_runner.go:195] Run: openssl version
	I0318 12:43:21.201045    5712 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 12:43:21.215353    5712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91202.pem && ln -fs /usr/share/ca-certificates/91202.pem /etc/ssl/certs/91202.pem"
	I0318 12:43:21.246370    5712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91202.pem
	I0318 12:43:21.253217    5712 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 12:43:21.253521    5712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:53 /usr/share/ca-certificates/91202.pem
	I0318 12:43:21.266456    5712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91202.pem
	I0318 12:43:21.274981    5712 command_runner.go:130] > 3ec20f2e
	I0318 12:43:21.287389    5712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91202.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 12:43:21.320914    5712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:43:21.354808    5712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:21.361560    5712 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:21.361560    5712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:21.372553    5712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:21.381560    5712 command_runner.go:130] > b5213941
	I0318 12:43:21.395210    5712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:43:21.425910    5712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9120.pem && ln -fs /usr/share/ca-certificates/9120.pem /etc/ssl/certs/9120.pem"
	I0318 12:43:21.459783    5712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9120.pem
	I0318 12:43:21.466197    5712 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 12:43:21.466263    5712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:53 /usr/share/ca-certificates/9120.pem
	I0318 12:43:21.478916    5712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9120.pem
	I0318 12:43:21.490949    5712 command_runner.go:130] > 51391683
	I0318 12:43:21.502945    5712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9120.pem /etc/ssl/certs/51391683.0"
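	
	The three test/ln pairs above implement OpenSSL's hashed trust directory: a CA is trusted once it is reachable as /etc/ssl/certs/<subject-hash>.0, where the hash comes from openssl x509 -hash. The generic form of one installation:
	
	# Sketch: trust a CA by linking it under its subject hash.
	pem=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$pem")
	sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"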
	I0318 12:43:21.538323    5712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:43:21.545300    5712 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:43:21.545300    5712 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0318 12:43:21.545300    5712 command_runner.go:130] > Device: 8,1	Inode: 7336229     Links: 1
	I0318 12:43:21.545300    5712 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 12:43:21.545300    5712 command_runner.go:130] > Access: 2024-03-18 12:18:36.848805156 +0000
	I0318 12:43:21.545300    5712 command_runner.go:130] > Modify: 2024-03-18 12:18:36.848805156 +0000
	I0318 12:43:21.545300    5712 command_runner.go:130] > Change: 2024-03-18 12:18:36.848805156 +0000
	I0318 12:43:21.545300    5712 command_runner.go:130] >  Birth: 2024-03-18 12:18:36.848805156 +0000
	I0318 12:43:21.556310    5712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 12:43:21.565316    5712 command_runner.go:130] > Certificate will not expire
	I0318 12:43:21.578012    5712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 12:43:21.588017    5712 command_runner.go:130] > Certificate will not expire
	I0318 12:43:21.599030    5712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 12:43:21.609102    5712 command_runner.go:130] > Certificate will not expire
	I0318 12:43:21.621678    5712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 12:43:21.630595    5712 command_runner.go:130] > Certificate will not expire
	I0318 12:43:21.643316    5712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 12:43:21.652549    5712 command_runner.go:130] > Certificate will not expire
	I0318 12:43:21.665949    5712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 12:43:21.674949    5712 command_runner.go:130] > Certificate will not expire
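	
	Each certificate above is probed with openssl x509 -checkend 86400, which exits nonzero if the cert expires within the next 24 hours (86400 seconds). The same checks collapsed into one loop:
	
	# Sketch: flag any control-plane cert expiring within 24 hours.
	for crt in /var/lib/minikube/certs/{apiserver-etcd-client,apiserver-kubelet-client,front-proxy-client}.crt \
	           /var/lib/minikube/certs/etcd/{server,healthcheck-client,peer}.crt; do
	    openssl x509 -noout -in "$crt" -checkend 86400 || echo "RENEW NEEDED: $crt" >&2
	done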
	I0318 12:43:21.675302    5712 kubeadm.go:391] StartCluster: {Name:multinode-642600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
8.4 ClusterName:multinode-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.148.129 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.159.102 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.157.200 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:43:21.684960    5712 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 12:43:21.726150    5712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 12:43:21.746132    5712 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0318 12:43:21.746364    5712 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0318 12:43:21.746423    5712 command_runner.go:130] > /var/lib/minikube/etcd:
	I0318 12:43:21.746423    5712 command_runner.go:130] > member
	W0318 12:43:21.746423    5712 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 12:43:21.746423    5712 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 12:43:21.746423    5712 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 12:43:21.758628    5712 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 12:43:21.778770    5712 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:43:21.779243    5712 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-642600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:43:21.780215    5712 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-642600" cluster setting kubeconfig missing "multinode-642600" context setting]
	I0318 12:43:21.781185    5712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:21.795191    5712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:43:21.796236    5712 kapi.go:59] client config for multinode-642600: &rest.Config{Host:"https://172.25.148.129:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-642600/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-642600/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x226b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 12:43:21.797199    5712 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 12:43:21.810105    5712 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 12:43:21.828392    5712 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0318 12:43:21.828392    5712 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0318 12:43:21.828392    5712 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0318 12:43:21.828392    5712 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0318 12:43:21.828392    5712 command_runner.go:130] >  kind: InitConfiguration
	I0318 12:43:21.828392    5712 command_runner.go:130] >  localAPIEndpoint:
	I0318 12:43:21.828392    5712 command_runner.go:130] > -  advertiseAddress: 172.25.151.112
	I0318 12:43:21.828392    5712 command_runner.go:130] > +  advertiseAddress: 172.25.148.129
	I0318 12:43:21.828392    5712 command_runner.go:130] >    bindPort: 8443
	I0318 12:43:21.828392    5712 command_runner.go:130] >  bootstrapTokens:
	I0318 12:43:21.828392    5712 command_runner.go:130] >    - groups:
	I0318 12:43:21.828392    5712 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0318 12:43:21.828392    5712 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0318 12:43:21.828392    5712 command_runner.go:130] >    name: "multinode-642600"
	I0318 12:43:21.828392    5712 command_runner.go:130] >    kubeletExtraArgs:
	I0318 12:43:21.828392    5712 command_runner.go:130] > -    node-ip: 172.25.151.112
	I0318 12:43:21.828392    5712 command_runner.go:130] > +    node-ip: 172.25.148.129
	I0318 12:43:21.828392    5712 command_runner.go:130] >    taints: []
	I0318 12:43:21.828392    5712 command_runner.go:130] >  ---
	I0318 12:43:21.828392    5712 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0318 12:43:21.828392    5712 command_runner.go:130] >  kind: ClusterConfiguration
	I0318 12:43:21.828392    5712 command_runner.go:130] >  apiServer:
	I0318 12:43:21.828392    5712 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.25.151.112"]
	I0318 12:43:21.828392    5712 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.25.148.129"]
	I0318 12:43:21.828392    5712 command_runner.go:130] >    extraArgs:
	I0318 12:43:21.828392    5712 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0318 12:43:21.828392    5712 command_runner.go:130] >  controllerManager:
	I0318 12:43:21.828392    5712 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.25.151.112
	+  advertiseAddress: 172.25.148.129
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-642600"
	   kubeletExtraArgs:
	-    node-ip: 172.25.151.112
	+    node-ip: 172.25.148.129
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.25.151.112"]
	+  certSANs: ["127.0.0.1", "localhost", "172.25.148.129"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
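	The drift check above reduces to shelling out to `diff -u` against the old and new kubeadm configs and branching on the exit status (0 = identical, 1 = drift, anything else = a real error). A minimal Go sketch of that pattern, with the paths taken from the log; this is an illustration, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and interprets the exit status the
// same way the log above does: 0 = identical, 1 = drift, other = error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: files identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit 2 or exec failure: real error
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Printf("detected kubeadm config drift, will reconfigure:\n%s", diff)
	}
}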
	I0318 12:43:21.828392    5712 kubeadm.go:1154] stopping kube-system containers ...
	I0318 12:43:21.838912    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 12:43:21.871904    5712 command_runner.go:130] > e81f1d2fdb36
	I0318 12:43:21.872059    5712 command_runner.go:130] > ed38da653fbe
	I0318 12:43:21.872059    5712 command_runner.go:130] > 996fb0f2ade6
	I0318 12:43:21.872094    5712 command_runner.go:130] > 3a9b4c05a5cc
	I0318 12:43:21.872094    5712 command_runner.go:130] > 5cf42651cb21
	I0318 12:43:21.872094    5712 command_runner.go:130] > 4bbad08fe59a
	I0318 12:43:21.872094    5712 command_runner.go:130] > 2f4709a3a45a
	I0318 12:43:21.872094    5712 command_runner.go:130] > fef37141be6d
	I0318 12:43:21.872094    5712 command_runner.go:130] > 301c80f8b38c
	I0318 12:43:21.872094    5712 command_runner.go:130] > a54be4436901
	I0318 12:43:21.872094    5712 command_runner.go:130] > 47777d4c0b90
	I0318 12:43:21.872094    5712 command_runner.go:130] > 4b94d396876e
	I0318 12:43:21.872094    5712 command_runner.go:130] > f100b1062a56
	I0318 12:43:21.872094    5712 command_runner.go:130] > aad98ae0cd7c
	I0318 12:43:21.872094    5712 command_runner.go:130] > 3500a9f1ca84
	I0318 12:43:21.872094    5712 command_runner.go:130] > d766c4514f0b
	I0318 12:43:21.873080    5712 docker.go:483] Stopping containers: [e81f1d2fdb36 ed38da653fbe 996fb0f2ade6 3a9b4c05a5cc 5cf42651cb21 4bbad08fe59a 2f4709a3a45a fef37141be6d 301c80f8b38c a54be4436901 47777d4c0b90 4b94d396876e f100b1062a56 aad98ae0cd7c 3500a9f1ca84 d766c4514f0b]
	I0318 12:43:21.882579    5712 ssh_runner.go:195] Run: docker stop e81f1d2fdb36 ed38da653fbe 996fb0f2ade6 3a9b4c05a5cc 5cf42651cb21 4bbad08fe59a 2f4709a3a45a fef37141be6d 301c80f8b38c a54be4436901 47777d4c0b90 4b94d396876e f100b1062a56 aad98ae0cd7c 3500a9f1ca84 d766c4514f0b
	I0318 12:43:21.907375    5712 command_runner.go:130] > e81f1d2fdb36
	I0318 12:43:21.907375    5712 command_runner.go:130] > ed38da653fbe
	I0318 12:43:21.907375    5712 command_runner.go:130] > 996fb0f2ade6
	I0318 12:43:21.907375    5712 command_runner.go:130] > 3a9b4c05a5cc
	I0318 12:43:21.907375    5712 command_runner.go:130] > 5cf42651cb21
	I0318 12:43:21.907375    5712 command_runner.go:130] > 4bbad08fe59a
	I0318 12:43:21.907375    5712 command_runner.go:130] > 2f4709a3a45a
	I0318 12:43:21.907375    5712 command_runner.go:130] > fef37141be6d
	I0318 12:43:21.908281    5712 command_runner.go:130] > 301c80f8b38c
	I0318 12:43:21.908281    5712 command_runner.go:130] > a54be4436901
	I0318 12:43:21.908281    5712 command_runner.go:130] > 47777d4c0b90
	I0318 12:43:21.908281    5712 command_runner.go:130] > 4b94d396876e
	I0318 12:43:21.908281    5712 command_runner.go:130] > f100b1062a56
	I0318 12:43:21.908281    5712 command_runner.go:130] > aad98ae0cd7c
	I0318 12:43:21.908281    5712 command_runner.go:130] > 3500a9f1ca84
	I0318 12:43:21.908281    5712 command_runner.go:130] > d766c4514f0b
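	The stop step above is a two-command pattern: list the IDs of every container (running or exited) whose kubelet-assigned name matches `k8s_<container>_<pod>_kube-system_...`, then stop them all in a single `docker stop`. A rough self-contained Go sketch, with the filter string copied from the log and error handling simplified:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List IDs of all containers whose name matches the kubelet naming scheme
	// for the kube-system namespace; --format keeps only the container ID.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return
	}
	// Stop them in one `docker stop` invocation, as in the log.
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("stopped %d kube-system containers\n", len(ids))
}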
	I0318 12:43:21.922781    5712 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 12:43:21.964269    5712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 12:43:21.982306    5712 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0318 12:43:21.982306    5712 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0318 12:43:21.982600    5712 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0318 12:43:21.982600    5712 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 12:43:21.982686    5712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 12:43:21.982686    5712 kubeadm.go:156] found existing configuration files:
	
	I0318 12:43:21.995588    5712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 12:43:22.012574    5712 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 12:43:22.013798    5712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 12:43:22.025175    5712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 12:43:22.056252    5712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 12:43:22.073646    5712 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 12:43:22.076433    5712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 12:43:22.089597    5712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 12:43:22.121143    5712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 12:43:22.141200    5712 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 12:43:22.141411    5712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 12:43:22.154373    5712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 12:43:22.188829    5712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 12:43:22.211875    5712 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 12:43:22.212870    5712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 12:43:22.223862    5712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
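	The four grep/rm pairs above all apply one cleanup rule: a kubeconfig under /etc/kubernetes that does not mention the stable control-plane endpoint is stale and gets deleted so the next kubeadm phase can regenerate it. Sketched in Go below; the endpoint and file list are copied from the log, and `os.Remove` stands in for the `sudo rm -f` run over SSH:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// If `grep` exits non-zero (pattern absent or, as in the log, the file is
	// missing entirely), remove the file so kubeadm can rewrite it.
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			os.Remove(f) // best effort; the file may already be gone
		}
	}
}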
	I0318 12:43:22.253860    5712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 12:43:22.271866    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 12:43:22.688838    5712 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 12:43:22.688921    5712 command_runner.go:130] > [certs] Using the existing "sa" key
	I0318 12:43:22.688921    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 12:43:23.659422    5712 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 12:43:23.659465    5712 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 12:43:23.659465    5712 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 12:43:23.659465    5712 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 12:43:23.659582    5712 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 12:43:23.659582    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 12:43:23.990576    5712 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 12:43:23.991551    5712 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 12:43:23.991551    5712 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0318 12:43:23.991551    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 12:43:24.092646    5712 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 12:43:24.092646    5712 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 12:43:24.092646    5712 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 12:43:24.092646    5712 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 12:43:24.092646    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 12:43:24.199590    5712 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
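	The restart just replayed the individual `kubeadm init` phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the versioned binaries directory. Roughly equivalent to the following loop; a sketch under those assumptions, not minikube's code:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Phases and paths mirror the commands recorded in the log above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		cmd := exec.Command("kubeadm", append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")...)
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.28.4:"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1) // stop at the first failed phase
		}
	}
}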
	I0318 12:43:24.199741    5712 api_server.go:52] waiting for apiserver process to appear ...
	I0318 12:43:24.213821    5712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:43:24.719364    5712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:43:25.225456    5712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:43:25.718935    5712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:43:26.225662    5712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:43:26.274272    5712 command_runner.go:130] > 1997
	I0318 12:43:26.274465    5712 api_server.go:72] duration metric: took 2.0745976s to wait for apiserver process to appear ...
	I0318 12:43:26.274465    5712 api_server.go:88] waiting for apiserver healthz status ...
	I0318 12:43:26.274465    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:43:30.431160    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 12:43:30.431258    5712 api_server.go:103] status: https://172.25.148.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 12:43:30.431258    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:43:30.509034    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 12:43:30.509450    5712 api_server.go:103] status: https://172.25.148.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 12:43:30.779944    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:43:30.787142    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 12:43:30.788009    5712 api_server.go:103] status: https://172.25.148.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 12:43:31.282368    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:43:31.294565    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 12:43:31.294607    5712 api_server.go:103] status: https://172.25.148.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 12:43:31.775192    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:43:31.784000    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 12:43:31.784571    5712 api_server.go:103] status: https://172.25.148.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 12:43:32.282336    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:43:32.292921    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 200:
	ok
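	The healthz sequence above is a plain poll loop: request /healthz until it returns 200 "ok", treating the interim 403 (anonymous request before RBAC bootstrap completes) and 500 (post-start hooks such as rbac/bootstrap-roles still pending) responses as "not ready yet". A minimal Go sketch, with the endpoint taken from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// No client certificate is wired up for this probe, so skip TLS verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// Poll roughly twice a second; 403 and 500 both fall through to the sleep.
	for {
		resp, err := client.Get("https://172.25.148.129:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}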
	I0318 12:43:32.292921    5712 round_trippers.go:463] GET https://172.25.148.129:8443/version
	I0318 12:43:32.292921    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:32.292921    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:32.292921    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:32.307425    5712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0318 12:43:32.308415    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:32.308454    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:32 GMT
	I0318 12:43:32.308454    5712 round_trippers.go:580]     Audit-Id: 43aceb14-36a6-4e46-b05e-76fe75a5153b
	I0318 12:43:32.308454    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:32.308454    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:32.308454    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:32.308454    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:32.308454    5712 round_trippers.go:580]     Content-Length: 264
	I0318 12:43:32.308454    5712 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 12:43:32.308454    5712 api_server.go:141] control plane version: v1.28.4
	I0318 12:43:32.308454    5712 api_server.go:131] duration metric: took 6.0339513s to wait for apiserver health ...
	I0318 12:43:32.308454    5712 cni.go:84] Creating CNI manager for ""
	I0318 12:43:32.308454    5712 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 12:43:32.312940    5712 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 12:43:32.329162    5712 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 12:43:32.338044    5712 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0318 12:43:32.338148    5712 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0318 12:43:32.338148    5712 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0318 12:43:32.338148    5712 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 12:43:32.338148    5712 command_runner.go:130] > Access: 2024-03-18 12:41:53.487737500 +0000
	I0318 12:43:32.338148    5712 command_runner.go:130] > Modify: 2024-03-15 22:00:10.000000000 +0000
	I0318 12:43:32.338148    5712 command_runner.go:130] > Change: 2024-03-18 12:41:44.149000000 +0000
	I0318 12:43:32.338262    5712 command_runner.go:130] >  Birth: -
	I0318 12:43:32.338262    5712 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 12:43:32.338262    5712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 12:43:32.417885    5712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 12:43:34.168750    5712 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0318 12:43:34.168750    5712 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0318 12:43:34.168750    5712 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0318 12:43:34.168750    5712 command_runner.go:130] > daemonset.apps/kindnet configured
	I0318 12:43:34.168883    5712 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.7509869s)
	I0318 12:43:34.168883    5712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 12:43:34.168883    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods
	I0318 12:43:34.168883    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.168883    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.168883    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.176093    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:43:34.176093    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.176093    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.176093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.179465    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.179465    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.179465    5712 round_trippers.go:580]     Audit-Id: 32f3e80b-ba51-4bef-8025-4405d3d75ffe
	I0318 12:43:34.179465    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.180947    5712 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1873"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83637 chars]
	I0318 12:43:34.187800    5712 system_pods.go:59] 12 kube-system pods found
	I0318 12:43:34.187800    5712 system_pods.go:61] "coredns-5dd5756b68-fgn7v" [7bc52797-b4bd-4046-b3d5-fae9c8ccd13b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 12:43:34.187800    5712 system_pods.go:61] "etcd-multinode-642600" [6f0ca14e-af4b-4442-8a48-28b69c699976] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 12:43:34.187800    5712 system_pods.go:61] "kindnet-d5llj" [caa4170d-6120-414a-950c-92a0380a70b8] Running
	I0318 12:43:34.187800    5712 system_pods.go:61] "kindnet-kpt4f" [acd9d7a0-0e27-4bbb-8562-6fbf374742ca] Running
	I0318 12:43:34.187800    5712 system_pods.go:61] "kindnet-thkjp" [a7e20c36-c1d1-4146-a66c-40448e1ae0e5] Running
	I0318 12:43:34.187800    5712 system_pods.go:61] "kube-apiserver-multinode-642600" [ab8e6b8b-cbac-4c90-8f57-9af2760ced9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 12:43:34.187800    5712 system_pods.go:61] "kube-controller-manager-multinode-642600" [1dd2a576-c5a0-44e5-b194-545e8b18962c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 12:43:34.187800    5712 system_pods.go:61] "kube-proxy-4dg79" [449242c2-ad12-4da5-b339-3be7ab8a9b16] Running
	I0318 12:43:34.187800    5712 system_pods.go:61] "kube-proxy-khbjt" [594efa46-7e30-40e6-92dd-9c9c80bc787a] Running
	I0318 12:43:34.187800    5712 system_pods.go:61] "kube-proxy-vts9f" [9545be8f-07fd-49dd-99bd-e9e976e65e7b] Running
	I0318 12:43:34.187800    5712 system_pods.go:61] "kube-scheduler-multinode-642600" [52e29d3b-d6e9-4109-916d-74123a2ab190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 12:43:34.187800    5712 system_pods.go:61] "storage-provisioner" [d2718b8a-26a9-4c86-bf9a-221d1ee23ceb] Running
	I0318 12:43:34.187800    5712 system_pods.go:74] duration metric: took 18.9164ms to wait for pod list to return data ...
	I0318 12:43:34.187800    5712 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:43:34.188666    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes
	I0318 12:43:34.188666    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.188666    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.188666    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.193555    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:34.193601    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.193601    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.193601    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.193601    5712 round_trippers.go:580]     Audit-Id: 86bd5076-fa9c-49aa-bbeb-584069174c5d
	I0318 12:43:34.193601    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.193601    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.193601    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.193601    5712 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1873"},"items":[{"metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15630 chars]
	I0318 12:43:34.195206    5712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:43:34.195284    5712 node_conditions.go:123] node cpu capacity is 2
	I0318 12:43:34.195284    5712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:43:34.195284    5712 node_conditions.go:123] node cpu capacity is 2
	I0318 12:43:34.195362    5712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:43:34.195362    5712 node_conditions.go:123] node cpu capacity is 2
	I0318 12:43:34.195362    5712 node_conditions.go:105] duration metric: took 7.5625ms to run NodePressure ...
	I0318 12:43:34.195410    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 12:43:34.595136    5712 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0318 12:43:34.595196    5712 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0318 12:43:34.595540    5712 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 12:43:34.595776    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0318 12:43:34.595776    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.595776    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.595776    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.601199    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:34.601199    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.601199    5712 round_trippers.go:580]     Audit-Id: e7cb6862-af2e-4c11-a0d4-9872d3b79787
	I0318 12:43:34.601199    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.601199    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.601199    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.601199    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.601199    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.604156    5712 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1875"},"items":[{"metadata":{"name":"etcd-multinode-642600","namespace":"kube-system","uid":"6f0ca14e-af4b-4442-8a48-28b69c699976","resourceVersion":"1860","creationTimestamp":"2024-03-18T12:43:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.148.129:2379","kubernetes.io/config.hash":"d5f09afee1a6ef36657c1ae3335ddda6","kubernetes.io/config.mirror":"d5f09afee1a6ef36657c1ae3335ddda6","kubernetes.io/config.seen":"2024-03-18T12:43:24.228249982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:43:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 29377 chars]
	I0318 12:43:34.606092    5712 kubeadm.go:733] kubelet initialised
	I0318 12:43:34.606092    5712 kubeadm.go:734] duration metric: took 10.521ms waiting for restarted kubelet to initialise ...
	I0318 12:43:34.606173    5712 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:43:34.606316    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods
	I0318 12:43:34.606362    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.606362    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.606415    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.614356    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:43:34.614718    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.614718    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.614718    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.614718    5712 round_trippers.go:580]     Audit-Id: 5a59db11-1d14-49b2-9598-cc4448f68a56
	I0318 12:43:34.614718    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.614718    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.614718    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.616859    5712 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1875"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83637 chars]
	I0318 12:43:34.621279    5712 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:34.621279    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:43:34.621279    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.621279    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.621279    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.625415    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:34.625415    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.626307    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.626307    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.626307    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.626307    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.626362    5712 round_trippers.go:580]     Audit-Id: 32630a99-0a82-4ba2-961e-18364b55e578
	I0318 12:43:34.626362    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.626501    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:43:34.626792    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:34.626792    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.626792    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.626792    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.631258    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:34.631317    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.631317    5712 round_trippers.go:580]     Audit-Id: 6a36dae6-d08a-4c3d-b097-6b4f469a7a34
	I0318 12:43:34.631317    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.631317    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.631317    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.631317    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.631317    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.631715    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:34.632096    5712 pod_ready.go:97] node "multinode-642600" hosting pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.632096    5712 pod_ready.go:81] duration metric: took 10.8168ms for pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:34.632096    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600" hosting pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.632096    5712 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:34.632096    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-642600
	I0318 12:43:34.632096    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.632096    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.632096    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.636843    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:34.636843    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.636899    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.636899    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.636899    5712 round_trippers.go:580]     Audit-Id: a6e13cb6-e867-4d6d-b4c3-c8f8002385b3
	I0318 12:43:34.636899    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.636899    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.636899    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.637391    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-642600","namespace":"kube-system","uid":"6f0ca14e-af4b-4442-8a48-28b69c699976","resourceVersion":"1860","creationTimestamp":"2024-03-18T12:43:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.148.129:2379","kubernetes.io/config.hash":"d5f09afee1a6ef36657c1ae3335ddda6","kubernetes.io/config.mirror":"d5f09afee1a6ef36657c1ae3335ddda6","kubernetes.io/config.seen":"2024-03-18T12:43:24.228249982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:43:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6097 chars]
	I0318 12:43:34.637928    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:34.637992    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.637992    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.637992    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.641708    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:34.642009    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.642009    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.642009    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.642009    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.642009    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.642009    5712 round_trippers.go:580]     Audit-Id: 539f5d1c-5dd2-47d6-bf00-d44564ec3cc9
	I0318 12:43:34.642009    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.642352    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:34.642733    5712 pod_ready.go:97] node "multinode-642600" hosting pod "etcd-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.642733    5712 pod_ready.go:81] duration metric: took 10.6374ms for pod "etcd-multinode-642600" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:34.642733    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600" hosting pod "etcd-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
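The repeating GET / Request Headers / Response Status / Response Headers / Response Body groups throughout this log are client-go's debug round tripper, not minikube-specific output. As a hedged note: the trace detail is keyed to klog verbosity, roughly v=6 for URLs and status, v=7 adding headers, and v=8 adding (truncated) response bodies, which is why full JSON bodies appear here. A minimal sketch of enabling the same trace in a standalone client-go program:

```go
// Sketch only: turn up klog verbosity so client-go's transport wrappers
// emit round_trippers.go lines like those in this log.
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	fs := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(fs)
	_ = fs.Set("v", "8") // ~v=6: URLs/status, ~v=7: +headers, ~v=8: +bodies
	// Build a rest.Config and clientset as usual; every request will now
	// log its method, URL, headers, and body as seen above.
}
```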
	I0318 12:43:34.642733    5712 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:34.642837    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-642600
	I0318 12:43:34.642921    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.642921    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.642921    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.647401    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:34.647401    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.647401    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.647401    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.647401    5712 round_trippers.go:580]     Audit-Id: 93f0c8ff-95f3-4443-8db5-c3b07d3342ca
	I0318 12:43:34.647401    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.647401    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.647401    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.647401    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-642600","namespace":"kube-system","uid":"ab8e6b8b-cbac-4c90-8f57-9af2760ced9c","resourceVersion":"1861","creationTimestamp":"2024-03-18T12:43:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.148.129:8443","kubernetes.io/config.hash":"624de65f019baf96d4a0e2fb6064e413","kubernetes.io/config.mirror":"624de65f019baf96d4a0e2fb6064e413","kubernetes.io/config.seen":"2024-03-18T12:43:24.228255882Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:43:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7653 chars]
	I0318 12:43:34.648073    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:34.648073    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.648073    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.648073    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.652650    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:34.652650    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.652650    5712 round_trippers.go:580]     Audit-Id: c379ac6e-d821-4bef-a43b-6da103b3f147
	I0318 12:43:34.652650    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.652650    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.652650    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.652650    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.652650    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.653281    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:34.653892    5712 pod_ready.go:97] node "multinode-642600" hosting pod "kube-apiserver-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.653892    5712 pod_ready.go:81] duration metric: took 11.1586ms for pod "kube-apiserver-multinode-642600" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:34.653892    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600" hosting pod "kube-apiserver-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.653892    5712 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:34.653892    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-642600
	I0318 12:43:34.653892    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.653892    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.653892    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.656485    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:43:34.657492    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.657492    5712 round_trippers.go:580]     Audit-Id: db91fcad-fcf3-49a4-92a3-30bfb50ee2e8
	I0318 12:43:34.657536    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.657536    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.657536    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.657571    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.657598    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.657598    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-642600","namespace":"kube-system","uid":"1dd2a576-c5a0-44e5-b194-545e8b18962c","resourceVersion":"1855","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a1608bc774d0b3e96e1b6fbbded5cb52","kubernetes.io/config.mirror":"a1608bc774d0b3e96e1b6fbbded5cb52","kubernetes.io/config.seen":"2024-03-18T12:18:50.896437006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7441 chars]
	I0318 12:43:34.658568    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:34.658568    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.658633    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.658633    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.661456    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:43:34.661456    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.662173    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.662173    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.662173    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.662173    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.662173    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.662173    5712 round_trippers.go:580]     Audit-Id: 639b0277-1310-4284-ae95-88d321fdf886
	I0318 12:43:34.662173    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:34.662815    5712 pod_ready.go:97] node "multinode-642600" hosting pod "kube-controller-manager-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.662815    5712 pod_ready.go:81] duration metric: took 8.9226ms for pod "kube-controller-manager-multinode-642600" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:34.662815    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600" hosting pod "kube-controller-manager-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:34.662815    5712 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4dg79" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:34.796545    5712 request.go:629] Waited for 133.579ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4dg79
	I0318 12:43:34.796733    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4dg79
	I0318 12:43:34.796733    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.796841    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.796841    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:34.800297    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:34.800297    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:34.801285    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:34 GMT
	I0318 12:43:34.801285    5712 round_trippers.go:580]     Audit-Id: decc170d-406e-4991-baf1-6d51a48af9dd
	I0318 12:43:34.801332    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:34.801332    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:34.801332    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:34.801332    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:34.801606    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4dg79","generateName":"kube-proxy-","namespace":"kube-system","uid":"449242c2-ad12-4da5-b339-3be7ab8a9b16","resourceVersion":"1871","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0318 12:43:34.999201    5712 request.go:629] Waited for 196.6983ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:34.999510    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:34.999510    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:34.999565    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:34.999565    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:35.004063    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:35.004144    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:35.004144    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:35.004144    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:35 GMT
	I0318 12:43:35.004144    5712 round_trippers.go:580]     Audit-Id: 74d36358-d5a0-4814-8c39-2ffb5316860e
	I0318 12:43:35.004223    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:35.004223    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:35.004223    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:35.004443    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:35.004713    5712 pod_ready.go:97] node "multinode-642600" hosting pod "kube-proxy-4dg79" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:35.004713    5712 pod_ready.go:81] duration metric: took 341.8961ms for pod "kube-proxy-4dg79" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:35.005180    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600" hosting pod "kube-proxy-4dg79" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
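The "Waited for … due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter rather than from the API server: requests pass through a token bucket sized by rest.Config.QPS and Burst (defaults 5 and 10), and the readiness loop's burst of back-to-back GETs drains it, producing the ~130-200ms sleeps logged above. A minimal sketch, assuming client-go, of where those knobs live:

```go
// Minimal sketch: the token bucket behind the "client-side throttling"
// waits is configured on rest.Config before the clientset is built.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; raising them removes the sleeps that
	// the log attributes to client-side throttling.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-642600", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(node.Name, "conditions:", len(node.Status.Conditions))
}
```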
	I0318 12:43:35.005180    5712 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-khbjt" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:35.202176    5712 request.go:629] Waited for 196.8189ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-khbjt
	I0318 12:43:35.202363    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-khbjt
	I0318 12:43:35.202363    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:35.202363    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:35.202493    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:35.206378    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:35.206787    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:35.206787    5712 round_trippers.go:580]     Audit-Id: 17a37ac7-71ff-4ad2-b99c-8df5e67c24b5
	I0318 12:43:35.206787    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:35.206787    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:35.206787    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:35.206787    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:35.206787    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:35 GMT
	I0318 12:43:35.207490    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-khbjt","generateName":"kube-proxy-","namespace":"kube-system","uid":"594efa46-7e30-40e6-92dd-9c9c80bc787a","resourceVersion":"1825","creationTimestamp":"2024-03-18T12:27:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:27:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5771 chars]
	I0318 12:43:35.404715    5712 request.go:629] Waited for 196.2004ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m03
	I0318 12:43:35.404916    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m03
	I0318 12:43:35.405086    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:35.405086    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:35.405086    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:35.409419    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:35.409419    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:35.409419    5712 round_trippers.go:580]     Audit-Id: e383b8c1-6ac7-4db0-8b81-3bf40b1567ef
	I0318 12:43:35.409419    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:35.409419    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:35.409419    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:35.409419    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:35.409419    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:35 GMT
	I0318 12:43:35.410078    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m03","uid":"e9bc5257-e8c0-493d-a533-c2a8a832d45e","resourceVersion":"1838","creationTimestamp":"2024-03-18T12:38:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_38_47_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0318 12:43:35.410556    5712 pod_ready.go:97] node "multinode-642600-m03" hosting pod "kube-proxy-khbjt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600-m03" has status "Ready":"Unknown"
	I0318 12:43:35.410556    5712 pod_ready.go:81] duration metric: took 405.3732ms for pod "kube-proxy-khbjt" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:35.410556    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600-m03" hosting pod "kube-proxy-khbjt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600-m03" has status "Ready":"Unknown"
	I0318 12:43:35.410556    5712 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vts9f" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:35.607219    5712 request.go:629] Waited for 196.6621ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vts9f
	I0318 12:43:35.607653    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vts9f
	I0318 12:43:35.607732    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:35.607732    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:35.607769    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:35.612070    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:35.612070    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:35.612070    5712 round_trippers.go:580]     Audit-Id: b4a4b5a0-8bf9-40db-b99c-8d58aaad0f5d
	I0318 12:43:35.612455    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:35.612455    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:35.612455    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:35.612455    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:35.612455    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:35 GMT
	I0318 12:43:35.612544    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vts9f","generateName":"kube-proxy-","namespace":"kube-system","uid":"9545be8f-07fd-49dd-99bd-e9e976e65e7b","resourceVersion":"648","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0318 12:43:35.810625    5712 request.go:629] Waited for 197.022ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:43:35.810625    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:43:35.810892    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:35.810892    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:35.810892    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:35.816058    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:35.816058    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:35.816058    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:35.816058    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:35 GMT
	I0318 12:43:35.816058    5712 round_trippers.go:580]     Audit-Id: 311f0653-c49b-4648-93f2-a9baa5c4aa02
	I0318 12:43:35.816058    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:35.816058    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:35.816058    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:35.816058    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"1670","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3827 chars]
	I0318 12:43:35.816890    5712 pod_ready.go:92] pod "kube-proxy-vts9f" in "kube-system" namespace has status "Ready":"True"
	I0318 12:43:35.816956    5712 pod_ready.go:81] duration metric: took 406.3313ms for pod "kube-proxy-vts9f" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:35.816956    5712 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:43:35.998359    5712 request.go:629] Waited for 181.2964ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-642600
	I0318 12:43:35.998496    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-642600
	I0318 12:43:35.998496    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:35.998496    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:35.998496    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:36.002968    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:36.003701    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:36.003701    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:36.003701    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:36.003701    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:36.003701    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:36 GMT
	I0318 12:43:36.003701    5712 round_trippers.go:580]     Audit-Id: dc7c2592-b25a-4f2c-b8e1-33f1f01c8d3e
	I0318 12:43:36.003701    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:36.003981    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-642600","namespace":"kube-system","uid":"52e29d3b-d6e9-4109-916d-74123a2ab190","resourceVersion":"1857","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cf50844b540be8ed0b3e767db413ac8f","kubernetes.io/config.mirror":"cf50844b540be8ed0b3e767db413ac8f","kubernetes.io/config.seen":"2024-03-18T12:18:50.896438106Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5153 chars]
	I0318 12:43:36.202872    5712 request.go:629] Waited for 198.2864ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:36.203254    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:36.203254    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:36.203339    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:36.203339    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:36.206259    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:43:36.206738    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:36.206738    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:36.206738    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:36.206738    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:36 GMT
	I0318 12:43:36.206738    5712 round_trippers.go:580]     Audit-Id: db7e6b8a-01e5-432d-ab83-31156aae76bd
	I0318 12:43:36.206738    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:36.206738    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:36.207728    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:36.208289    5712 pod_ready.go:97] node "multinode-642600" hosting pod "kube-scheduler-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:36.208421    5712 pod_ready.go:81] duration metric: took 391.4621ms for pod "kube-scheduler-multinode-642600" in "kube-system" namespace to be "Ready" ...
	E0318 12:43:36.208421    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600" hosting pod "kube-scheduler-multinode-642600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600" has status "Ready":"False"
	I0318 12:43:36.208421    5712 pod_ready.go:38] duration metric: took 1.6022375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
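To summarize the WaitExtra phase that just completed: for each system-critical pod the checker GETs the pod, then GETs the node named in its spec, and skips the pod (the paired pod_ready.go:97 / pod_ready.go:66 lines) whenever that node's Ready condition is not True, which is why everything hosted on multinode-642600 (Ready "False") and multinode-642600-m03 (Ready "Unknown") was skipped while kube-proxy-vts9f on multinode-642600-m02 passed. A hedged sketch of that check, with illustrative names rather than minikube's actual helpers:

```go
// Hedged sketch of the WaitExtra check: a pod only counts as "Ready"
// if the node hosting it reports a Ready condition of True.
package readiness

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func podOnReadyNode(ctx context.Context, c kubernetes.Interface, ns, pod string) (bool, error) {
	p, err := c.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	n, err := c.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range n.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			// Mirrors the log: "False" or "Unknown" means skip the pod.
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, fmt.Errorf("node %q has no Ready condition", n.Name)
}
```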
	I0318 12:43:36.208421    5712 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 12:43:36.230536    5712 command_runner.go:130] > -16
	I0318 12:43:36.230614    5712 ops.go:34] apiserver oom_adj: -16
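The oom_adj probe above confirms the kube-apiserver is shielded from the kernel OOM killer: the legacy /proc/&lt;pid&gt;/oom_adj value ranges from -17 (never kill) to +15, so the observed -16 makes the apiserver one of the last processes the kernel would reclaim under memory pressure. A minimal sketch of the same read, with a hypothetical helper name:

```go
// Sketch of the oom_adj probe (oomAdj is a hypothetical helper, not
// minikube's code): read the legacy per-process OOM bias from procfs.
package oom

import (
	"fmt"
	"os"
	"strings"
)

func oomAdj(pid int) (string, error) {
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil // "-16" for kube-apiserver here
}
```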
	I0318 12:43:36.230614    5712 kubeadm.go:591] duration metric: took 14.4841002s to restartPrimaryControlPlane
	I0318 12:43:36.230734    5712 kubeadm.go:393] duration metric: took 14.5553123s to StartCluster
	I0318 12:43:36.230734    5712 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:36.230979    5712 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 12:43:36.232563    5712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
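The lock.go line above shows minikube serializing the kubeconfig write behind a named cross-process mutex whose spec (Name, Clock, Delay:500ms, Timeout:1m0s, Cancel) is printed verbatim; that field set matches the github.com/juju/mutex Spec, though the attribution is an inference from the log rather than something the report states. A rough sketch of acquiring such a lock, under that assumption:

```go
// Hedged sketch (assumption: github.com/juju/mutex/v2 with juju/clock is
// what produces the Spec printed in the log). Acquire retries every Delay
// until the named machine-wide mutex is free or Timeout expires.
package kubeconfig

import (
	"time"

	"github.com/juju/clock"
	"github.com/juju/mutex/v2"
)

func writeWithLock(name string, write func() error) error {
	releaser, err := mutex.Acquire(mutex.Spec{
		Name:    name,            // e.g. the "mk4f4c…" name in the log line
		Clock:   clock.WallClock, // real time source
		Delay:   500 * time.Millisecond,
		Timeout: time.Minute,
	})
	if err != nil {
		return err
	}
	defer releaser.Release()
	return write()
}
```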
	I0318 12:43:36.234442    5712 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.148.129 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 12:43:36.234442    5712 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 12:43:36.234775    5712 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:43:36.252883    5712 out.go:177] * Verifying Kubernetes components...
	I0318 12:43:36.259375    5712 out.go:177] * Enabled addons: 
	I0318 12:43:36.261674    5712 addons.go:505] duration metric: took 27.232ms for enable addons: enabled=[]
	I0318 12:43:36.271947    5712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:36.622755    5712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:43:36.657427    5712 node_ready.go:35] waiting up to 6m0s for node "multinode-642600" to be "Ready" ...
	I0318 12:43:36.657775    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:36.657775    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:36.657828    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:36.657828    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:36.661755    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:36.661755    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:36.661755    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:36.661755    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:36.661755    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:36 GMT
	I0318 12:43:36.661755    5712 round_trippers.go:580]     Audit-Id: 87fb7c74-6cf5-46ff-abe1-017bf2f0811b
	I0318 12:43:36.661755    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:36.661755    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:36.662762    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:37.169346    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:37.169346    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:37.169462    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:37.169462    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:37.174300    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:37.174300    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:37.174300    5712 round_trippers.go:580]     Audit-Id: 50a4149e-e944-4e64-a28c-82f882c3ea09
	I0318 12:43:37.174300    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:37.174300    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:37.174382    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:37.174382    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:37.174382    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:37 GMT
	I0318 12:43:37.174666    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:37.668041    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:37.668290    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:37.668290    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:37.668290    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:37.672053    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:37.672053    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:37.673044    5712 round_trippers.go:580]     Audit-Id: e55d01fb-974b-47a9-8954-0055cfc16a76
	I0318 12:43:37.673044    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:37.673069    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:37.673069    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:37.673069    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:37.673069    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:37 GMT
	I0318 12:43:37.673402    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:38.171467    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:38.171760    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:38.171760    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:38.171913    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:38.175560    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:38.175560    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:38.177184    5712 round_trippers.go:580]     Audit-Id: f6da97ee-f3f3-4de5-a853-42e211405ce1
	I0318 12:43:38.177184    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:38.177225    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:38.177225    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:38.177225    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:38.177225    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:38 GMT
	I0318 12:43:38.177694    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:38.669564    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:38.669564    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:38.669564    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:38.669564    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:38.674181    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:38.674181    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:38.674615    5712 round_trippers.go:580]     Audit-Id: 8d5f4768-48d9-433e-b712-b354ae2572da
	I0318 12:43:38.674615    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:38.674615    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:38.674615    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:38.674615    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:38.674615    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:38 GMT
	I0318 12:43:38.675324    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:38.675915    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
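From here the log settles into the node_ready loop: re-GET the node roughly every 500ms and report `has status "Ready":"False"` until the kubelet posts a True Ready condition or the 6m0s budget from start.go is exhausted. A minimal sketch of that pattern, assuming k8s.io/apimachinery's wait helpers rather than minikube's actual loop:

```go
// Illustrative polling loop: re-fetch the node every 500ms until its
// Ready condition is True, giving up after the caller's timeout.
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			n, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, cond := range n.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}
```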
	I0318 12:43:39.169336    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:39.169408    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:39.169408    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:39.169408    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:39.173847    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:39.173847    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:39.173847    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:39 GMT
	I0318 12:43:39.173847    5712 round_trippers.go:580]     Audit-Id: b9cb5316-bd68-45d4-aac1-a671e6019340
	I0318 12:43:39.173847    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:39.174063    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:39.174063    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:39.174063    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:39.174457    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:39.669846    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:39.670176    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:39.670176    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:39.670176    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:39.675779    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:39.675779    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:39.675779    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:39.675779    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:39.675779    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:39.675779    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:39.675779    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:39 GMT
	I0318 12:43:39.675779    5712 round_trippers.go:580]     Audit-Id: 2a34cdf3-f8fc-43d1-83e6-ed6142e2751f
	I0318 12:43:39.676374    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:40.168133    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:40.168133    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:40.168216    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:40.168216    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:40.175565    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:43:40.175565    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:40.175565    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:40 GMT
	I0318 12:43:40.175565    5712 round_trippers.go:580]     Audit-Id: 84f538ff-d286-4938-a41d-328638b08475
	I0318 12:43:40.175945    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:40.175997    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:40.175997    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:40.175997    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:40.176224    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:40.668332    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:40.668398    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:40.668398    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:40.668398    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:40.672315    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:40.672315    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:40.672315    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:40.672315    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:40 GMT
	I0318 12:43:40.672315    5712 round_trippers.go:580]     Audit-Id: bb481d73-8c93-4a5b-a328-a905bbcbfc3d
	I0318 12:43:40.672315    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:40.672315    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:40.672315    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:40.672315    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:41.167865    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:41.167865    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:41.168096    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:41.168096    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:41.173010    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:41.173814    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:41.173814    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:41.173814    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:41 GMT
	I0318 12:43:41.173814    5712 round_trippers.go:580]     Audit-Id: 5394d356-fac2-42f0-a02c-00f7b2337e5f
	I0318 12:43:41.173814    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:41.173814    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:41.173814    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:41.173814    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:41.174640    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
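	[The repeated GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600 blocks above are minikube's node-readiness wait loop (node_ready.go) polling the API server roughly every 500 ms until the node's Ready condition turns True; each cycle issues a fresh request, which is why every response carries a new Audit-Id while the node's resourceVersion stays at 1851. A minimal sketch of such a poll loop against the same endpoint, using client-go rather than minikube's internal helpers -- names such as waitNodeReady are illustrative, not from the minikube source:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the named node until its Ready condition is True,
	// mirroring the ~500 ms GET cadence visible in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		// Assumes a kubeconfig pointing at the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "multinode-642600"); err != nil {
			panic(err)
		}
	}
	]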
	I0318 12:43:41.667620    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:41.667620    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:41.667700    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:41.667700    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:41.672058    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:41.672651    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:41.672651    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:41 GMT
	I0318 12:43:41.672651    5712 round_trippers.go:580]     Audit-Id: 555e16c9-70bd-4af7-8403-db7dde816310
	I0318 12:43:41.672651    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:41.672651    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:41.672651    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:41.672651    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:41.672917    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:42.166448    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:42.166519    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:42.166519    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:42.166519    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:42.170813    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:42.170813    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:42.170813    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:42.170813    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:42.171206    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:42.171206    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:42.171206    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:42 GMT
	I0318 12:43:42.171206    5712 round_trippers.go:580]     Audit-Id: 6568f48c-bd14-4067-a18d-cb238284ce74
	I0318 12:43:42.171554    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:42.666260    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:42.666260    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:42.666344    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:42.666344    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:42.670683    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:42.670683    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:42.670898    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:42.670898    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:42.670961    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:42.670961    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:42.670961    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:42 GMT
	I0318 12:43:42.670961    5712 round_trippers.go:580]     Audit-Id: 54be4d49-ce73-478f-a6ab-7be0188c71af
	I0318 12:43:42.671057    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:43.164942    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:43.164942    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:43.164942    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:43.164942    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:43.169630    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:43.169630    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:43.170038    5712 round_trippers.go:580]     Audit-Id: df0e439a-ebfe-4b43-a800-dad5516e9465
	I0318 12:43:43.170038    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:43.170038    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:43.170038    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:43.170038    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:43.170038    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:43 GMT
	I0318 12:43:43.170410    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1851","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 12:43:43.667676    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:43.667732    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:43.667732    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:43.667732    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:43.671391    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:43.671391    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:43.671391    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:43.671391    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:43.671391    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:43 GMT
	I0318 12:43:43.671391    5712 round_trippers.go:580]     Audit-Id: f812b4b8-f00b-4242-89e2-9f6535b60454
	I0318 12:43:43.671391    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:43.671391    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:43.671391    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:43.672542    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
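	[Between the two node_ready.go status checks above, the node object's resourceVersion advanced from 1851 to 1970 while "Ready" stayed "False": the kubelet has posted a status update, but the Ready condition has not flipped yet. A hedged alternative sketch, again not minikube's actual code, that surfaces such updates as pushed watch events instead of the 500 ms GET polling shown in the log:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Watch only this node; the API server pushes each change, so
		// resourceVersion bumps (1851 -> 1970 in the log) arrive as events
		// rather than being discovered by repeated GETs.
		w, err := cs.CoreV1().Nodes().Watch(context.Background(), metav1.ListOptions{
			FieldSelector: "metadata.name=multinode-642600",
		})
		if err != nil {
			panic(err)
		}
		defer w.Stop()
		for ev := range w.ResultChan() {
			node, ok := ev.Object.(*corev1.Node)
			if !ok {
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("rv=%s Ready=%s\n", node.ResourceVersion, c.Status)
				}
			}
		}
	}
	]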
	I0318 12:43:44.169018    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:44.169084    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:44.169084    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:44.169084    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:44.173851    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:44.173851    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:44.174049    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:44 GMT
	I0318 12:43:44.174049    5712 round_trippers.go:580]     Audit-Id: 28931d4c-4236-4207-a7a1-ec6c251b13b9
	I0318 12:43:44.174049    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:44.174049    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:44.174049    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:44.174049    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:44.174573    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:44.672676    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:44.672676    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:44.672676    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:44.672676    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:44.676120    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:44.676463    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:44.676463    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:44 GMT
	I0318 12:43:44.676463    5712 round_trippers.go:580]     Audit-Id: 4fd437ce-cd21-47ce-8d45-96fe2cca4950
	I0318 12:43:44.676463    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:44.676463    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:44.676463    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:44.676463    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:44.676952    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:45.158939    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:45.158939    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:45.158939    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:45.158939    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:45.164575    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:45.164575    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:45.164575    5712 round_trippers.go:580]     Audit-Id: 053f2122-1224-41ba-ae01-94754d5ab6c7
	I0318 12:43:45.164575    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:45.164575    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:45.164575    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:45.164691    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:45.164691    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:45 GMT
	I0318 12:43:45.164961    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:45.662331    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:45.662331    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:45.662331    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:45.662331    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:45.667927    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:45.668006    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:45.668006    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:45.668006    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:45 GMT
	I0318 12:43:45.668006    5712 round_trippers.go:580]     Audit-Id: bb4f3337-efeb-4bde-bb59-1faa7853281f
	I0318 12:43:45.668006    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:45.668006    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:45.668006    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:45.668316    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:46.158234    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:46.158425    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:46.158425    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:46.158425    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:46.166533    5712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:43:46.166533    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:46.166533    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:46.166533    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:46.166533    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:46.166533    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:46 GMT
	I0318 12:43:46.166533    5712 round_trippers.go:580]     Audit-Id: 5c78fcb5-4079-4939-a3b2-883b77ed2b76
	I0318 12:43:46.166533    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:46.167680    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:46.168224    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:43:46.661537    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:46.661689    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:46.661689    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:46.661781    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:46.665138    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:46.665138    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:46.665138    5712 round_trippers.go:580]     Audit-Id: 4d8ac18f-15ce-48d5-b368-ae69e0a81030
	I0318 12:43:46.665138    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:46.665138    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:46.665138    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:46.665138    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:46.666214    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:46 GMT
	I0318 12:43:46.666404    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:47.162055    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:47.162138    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:47.162138    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:47.162138    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:47.168042    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:47.168042    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:47.168042    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:47.168042    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:47.168042    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:47.168042    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:47.168042    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:47 GMT
	I0318 12:43:47.168042    5712 round_trippers.go:580]     Audit-Id: 161d3d6a-5e53-4618-8fd8-4737398dcb18
	I0318 12:43:47.168042    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:47.665742    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:47.665742    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:47.665742    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:47.665742    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:47.671798    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:43:47.671798    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:47.671798    5712 round_trippers.go:580]     Audit-Id: 2ebfc400-c4be-46e0-ac7f-f37c29495c7c
	I0318 12:43:47.671798    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:47.671798    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:47.671798    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:47.671798    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:47.671798    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:47 GMT
	I0318 12:43:47.671798    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:48.169986    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:48.170244    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:48.170244    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:48.170244    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:48.175065    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:48.175065    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:48.175149    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:48.175149    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:48.175149    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:48.175149    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:48 GMT
	I0318 12:43:48.175149    5712 round_trippers.go:580]     Audit-Id: 397a45e5-2cbf-43e4-9444-436e2b4bf965
	I0318 12:43:48.175149    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:48.176215    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:48.176215    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:43:48.671247    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:48.671247    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:48.671247    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:48.671247    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:48.675849    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:48.676113    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:48.676113    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:48 GMT
	I0318 12:43:48.676186    5712 round_trippers.go:580]     Audit-Id: edce76cf-b374-48e6-92d4-a7ee22289096
	I0318 12:43:48.676186    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:48.676186    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:48.676186    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:48.676186    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:48.676249    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:49.170409    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:49.170494    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:49.170494    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:49.170494    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:49.175368    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:49.175612    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:49.175612    5712 round_trippers.go:580]     Audit-Id: 5551d1fc-bd82-4f98-b17c-f96116e78ebd
	I0318 12:43:49.175612    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:49.175612    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:49.175682    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:49.175738    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:49.181524    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:49 GMT
	I0318 12:43:49.181890    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:49.668925    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:49.669126    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:49.669126    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:49.669126    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:49.675026    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:49.675026    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:49.675026    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:49.675026    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:49.675026    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:49.675026    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:49 GMT
	I0318 12:43:49.675026    5712 round_trippers.go:580]     Audit-Id: cac1e2be-820c-4636-91ac-af38cbdd2b7a
	I0318 12:43:49.675026    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:49.675712    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:50.165361    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:50.165361    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:50.165361    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:50.165361    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:50.169951    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:50.169951    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:50.169951    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:50.169951    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:50.169951    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:50 GMT
	I0318 12:43:50.169951    5712 round_trippers.go:580]     Audit-Id: bc032467-d888-45b8-a3ff-cdaa0ec86ea0
	I0318 12:43:50.170285    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:50.170285    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:50.170633    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:50.666911    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:50.666911    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:50.666911    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:50.666911    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:50.670475    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:50.670475    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:50.670475    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:50 GMT
	I0318 12:43:50.671245    5712 round_trippers.go:580]     Audit-Id: 8c5f782c-703c-473e-aea6-23f07efcb29a
	I0318 12:43:50.671245    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:50.671245    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:50.671245    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:50.671245    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:50.671401    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:50.671984    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:43:51.166483    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:51.166719    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:51.166719    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:51.166719    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:51.173566    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:43:51.173566    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:51.173566    5712 round_trippers.go:580]     Audit-Id: 79c3dd9b-0dc4-4389-b982-c990ebc68b1f
	I0318 12:43:51.173566    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:51.173566    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:51.173566    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:51.173566    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:51.173566    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:51 GMT
	I0318 12:43:51.174160    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:51.666702    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:51.666702    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:51.666702    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:51.666702    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:51.671139    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:51.671514    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:51.671514    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:51.671514    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:51.671514    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:51 GMT
	I0318 12:43:51.671514    5712 round_trippers.go:580]     Audit-Id: d7d1e139-04b9-4dfc-8ca5-67a5d11754a6
	I0318 12:43:51.671514    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:51.671514    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:51.671817    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:52.165682    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:52.165918    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:52.165918    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:52.165918    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:52.173492    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:43:52.173492    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:52.173492    5712 round_trippers.go:580]     Audit-Id: c0cfb54c-d963-41b4-89b0-f38b8b8a26f1
	I0318 12:43:52.173492    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:52.173492    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:52.173492    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:52.173492    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:52.173492    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:52 GMT
	I0318 12:43:52.174184    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:52.663849    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:52.663849    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:52.663849    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:52.663849    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:52.669926    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:43:52.669926    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:52.669926    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:52.669926    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:52.669926    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:52.669926    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:52.669926    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:52 GMT
	I0318 12:43:52.669926    5712 round_trippers.go:580]     Audit-Id: 60a9cca1-02b2-473e-a566-5ab730457c66
	I0318 12:43:52.670629    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:53.161583    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:53.161583    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:53.161583    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:53.161583    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:53.165384    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:53.165384    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:53.165384    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:53.165384    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:53.165384    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:53.165384    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:53 GMT
	I0318 12:43:53.165384    5712 round_trippers.go:580]     Audit-Id: c17c0652-4e63-4448-819b-acf4a87a554d
	I0318 12:43:53.165384    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:53.165384    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:53.165384    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
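
The block above (and each repetition that follows) is minikube's node-readiness poll: roughly every 500ms it issues GET /api/v1/nodes/multinode-642600, inspects the node's NodeReady condition, and logs `has status "Ready":"False"` until the kubelet reports Ready. As a minimal sketch of an equivalent wait loop in client-go (hypothetical names and interval inferred from the timestamps above; this is not minikube's actual node_ready.go code):

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls the API server for the named node until its
// NodeReady condition is True, or until the timeout elapses. Illustrative
// sketch only: function and parameter names are hypothetical.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return nil // node reports Ready
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q never reached Ready within %v", name, timeout)
		}
		time.Sleep(interval) // the log above shows roughly 500ms between polls
	}
}
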
	I0318 12:43:53.661918    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:53.662000    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:53.662000    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:53.662000    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:53.665448    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:53.666305    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:53.666305    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:53.666305    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:53.666305    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:53 GMT
	I0318 12:43:53.666305    5712 round_trippers.go:580]     Audit-Id: 5d5e4360-1a5e-49b0-ae56-832f71c8d1d2
	I0318 12:43:53.666305    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:53.666305    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:53.666543    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:54.162060    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:54.162060    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:54.162060    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:54.162060    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:54.166540    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:54.166540    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:54.166540    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:54.166540    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:54.166540    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:54.166540    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:54.166540    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:54 GMT
	I0318 12:43:54.166540    5712 round_trippers.go:580]     Audit-Id: bbedea72-b620-4b33-93be-fef37cf29dce
	I0318 12:43:54.166540    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:54.661910    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:54.662007    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:54.662086    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:54.662086    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:54.669002    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:43:54.669002    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:54.669002    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:54.669002    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:54.669002    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:54 GMT
	I0318 12:43:54.669002    5712 round_trippers.go:580]     Audit-Id: f1b86ded-4167-413e-af24-1ab28201017f
	I0318 12:43:54.669002    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:54.669002    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:54.669002    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:55.162685    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:55.162961    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:55.162961    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:55.162961    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:55.167303    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:55.167457    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:55.167457    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:55.167457    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:55.167457    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:55.167457    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:55.167457    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:55 GMT
	I0318 12:43:55.167457    5712 round_trippers.go:580]     Audit-Id: 4621dbda-9972-451d-96da-d83a1ff8ee5a
	I0318 12:43:55.167879    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:55.168384    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:43:55.663999    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:55.664100    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:55.664100    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:55.664100    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:55.668466    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:55.668466    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:55.668466    5712 round_trippers.go:580]     Audit-Id: ac6e8358-2ac8-4c22-bb43-e335adc4a9ac
	I0318 12:43:55.668466    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:55.668466    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:55.668466    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:55.668466    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:55.668466    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:55 GMT
	I0318 12:43:55.669317    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:56.162622    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:56.162697    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:56.162697    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:56.162910    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:56.167973    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:56.168035    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:56.168035    5712 round_trippers.go:580]     Audit-Id: efc38058-b725-488d-a35d-0924fc3cf052
	I0318 12:43:56.168035    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:56.168035    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:56.168035    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:56.168035    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:56.168035    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:56 GMT
	I0318 12:43:56.168035    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:56.661868    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:56.661868    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:56.661868    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:56.662125    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:56.667732    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:56.667732    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:56.668126    5712 round_trippers.go:580]     Audit-Id: 4e5d7050-714f-4156-bc75-88983ab263d7
	I0318 12:43:56.668126    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:56.668126    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:56.668126    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:56.668126    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:56.668126    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:56 GMT
	I0318 12:43:56.668341    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:57.162039    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:57.162172    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:57.162172    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:57.162172    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:57.166583    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:57.166583    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:57.166583    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:57.166583    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:57.166583    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:57 GMT
	I0318 12:43:57.166583    5712 round_trippers.go:580]     Audit-Id: b172b030-90bb-40b0-87a7-48a884d4e9cf
	I0318 12:43:57.166583    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:57.166583    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:57.166583    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:57.660828    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:57.660828    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:57.660828    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:57.660828    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:57.665769    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:57.666027    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:57.666027    5712 round_trippers.go:580]     Audit-Id: 6efc7aba-bd41-4b27-ac82-a0d753210de8
	I0318 12:43:57.666027    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:57.666027    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:57.666027    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:57.666027    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:57.666027    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:57 GMT
	I0318 12:43:57.666294    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:57.666928    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:43:58.161421    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:58.161483    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:58.161483    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:58.161483    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:58.166228    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:58.166228    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:58.166228    5712 round_trippers.go:580]     Audit-Id: d9328c30-d055-4918-8a16-a23e32ed32b8
	I0318 12:43:58.166228    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:58.166329    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:58.166329    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:58.166329    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:58.166329    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:58 GMT
	I0318 12:43:58.166721    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:58.660815    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:58.660815    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:58.660815    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:58.660815    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:58.665000    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:43:58.665000    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:58.665000    5712 round_trippers.go:580]     Audit-Id: 4eecb8b2-8566-4a52-a80c-71268a4e990e
	I0318 12:43:58.665000    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:58.665000    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:58.665000    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:58.665000    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:58.665000    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:58 GMT
	I0318 12:43:58.665797    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:59.162912    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:59.163213    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:59.163213    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:59.163358    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:59.168861    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:43:59.168861    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:59.168861    5712 round_trippers.go:580]     Audit-Id: ec9cd06a-239a-4939-864c-7e973a193e42
	I0318 12:43:59.168861    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:59.168861    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:59.168861    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:59.168861    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:59.168861    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:59 GMT
	I0318 12:43:59.169544    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:59.665059    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:43:59.665129    5712 round_trippers.go:469] Request Headers:
	I0318 12:43:59.665129    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:59.665129    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:43:59.668877    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:43:59.668877    5712 round_trippers.go:577] Response Headers:
	I0318 12:43:59.668877    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:43:59.669308    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:43:59.669308    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:43:59.669308    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:43:59 GMT
	I0318 12:43:59.669308    5712 round_trippers.go:580]     Audit-Id: 3a78e90a-cda6-4dd6-9323-2386ea76d45d
	I0318 12:43:59.669308    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:43:59.669873    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:43:59.670470    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:44:00.164173    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:00.164173    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:00.164173    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:00.164457    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:00.168667    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:00.168667    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:00.169177    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:00.169177    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:00 GMT
	I0318 12:44:00.169177    5712 round_trippers.go:580]     Audit-Id: 099c4fb6-e1ce-4e00-a686-367502e4bbfa
	I0318 12:44:00.169177    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:00.169177    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:00.169252    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:00.169312    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:00.660552    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:00.660552    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:00.660837    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:00.660837    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:00.664713    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:00.664915    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:00.664915    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:00.664915    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:00 GMT
	I0318 12:44:00.664915    5712 round_trippers.go:580]     Audit-Id: b12cc792-b1d4-4147-a4e3-2b277c542231
	I0318 12:44:00.664915    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:00.664915    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:00.664915    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:00.665155    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:01.160414    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:01.160414    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:01.160414    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:01.160414    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:01.164971    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:01.164971    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:01.164971    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:01.165421    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:01.165421    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:01.165421    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:01 GMT
	I0318 12:44:01.165421    5712 round_trippers.go:580]     Audit-Id: 10cec0ac-c421-4a5f-ba97-35305c398bac
	I0318 12:44:01.165421    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:01.165937    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:01.658802    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:01.658802    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:01.658878    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:01.658878    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:01.664780    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:01.665564    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:01.665564    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:01.665564    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:01.665564    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:01.665564    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:01.665564    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:01 GMT
	I0318 12:44:01.665564    5712 round_trippers.go:580]     Audit-Id: 425ee5d9-7b7b-4dc0-b098-ba6c41e0c45a
	I0318 12:44:01.665810    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:02.158548    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:02.158637    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:02.158637    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:02.158637    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:02.162950    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:02.162950    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:02.162950    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:02.162950    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:02 GMT
	I0318 12:44:02.162950    5712 round_trippers.go:580]     Audit-Id: d80f6055-9b8e-4f6b-b7f4-7d804fbd67c9
	I0318 12:44:02.162950    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:02.162950    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:02.162950    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:02.162950    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:02.163939    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:44:02.673079    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:02.673079    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:02.673079    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:02.673435    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:02.682886    5712 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 12:44:02.682886    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:02.682886    5712 round_trippers.go:580]     Audit-Id: 4f4d2e5b-1ac4-44c5-b670-1d184c2aaca2
	I0318 12:44:02.682886    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:02.682886    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:02.682886    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:02.682886    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:02.682886    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:02 GMT
	I0318 12:44:02.682886    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:03.161138    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:03.161138    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:03.161138    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:03.161138    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:03.167501    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:03.167501    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:03.167501    5712 round_trippers.go:580]     Audit-Id: 1dd95ea9-a79a-4ea3-b6f5-00f527b1a9cf
	I0318 12:44:03.167501    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:03.167501    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:03.167501    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:03.167501    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:03.167501    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:03 GMT
	I0318 12:44:03.169299    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:03.661075    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:03.661294    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:03.661294    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:03.661294    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:03.666402    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:03.666402    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:03.666402    5712 round_trippers.go:580]     Audit-Id: 8e5f19dd-f7ca-4733-acb1-e2a5b1cbd95c
	I0318 12:44:03.666402    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:03.666402    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:03.666402    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:03.666402    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:03.666402    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:03 GMT
	I0318 12:44:03.666402    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:04.165006    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:04.165100    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:04.165100    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:04.165100    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:04.195897    5712 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0318 12:44:04.196123    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:04.196123    5712 round_trippers.go:580]     Audit-Id: e80f46d4-1594-44cb-bb65-da3ae6b924e9
	I0318 12:44:04.196123    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:04.196123    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:04.196123    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:04.196123    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:04.196123    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:04 GMT
	I0318 12:44:04.196515    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:04.197042    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
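
Each `round_trippers.go` block records one request/response pair: the method and URL, the request headers, the response status with latency, and the API server's response headers. The Audit-Id is unique per request, while the repeating X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid identify the API Priority and Fairness flow schema and priority level that admitted each poll. A hedged sketch of a wrapper that produces output of this shape (illustrative only, not client-go's actual round_trippers implementation):

package logtransport

import (
	"log"
	"net/http"
	"time"
)

// LoggingRoundTripper wraps another http.RoundTripper and prints the
// request line, request headers, response status with latency, and
// response headers, mirroring the shape of the log lines above.
type LoggingRoundTripper struct {
	Next http.RoundTripper // e.g. http.DefaultTransport
}

func (l LoggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, vs := range req.Header {
		for _, v := range vs {
			log.Printf("    %s: %s", k, v)
		}
	}
	start := time.Now()
	resp, err := l.Next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	log.Printf("Response Headers:")
	for k, vs := range resp.Header {
		for _, v := range vs {
			log.Printf("    %s: %s", k, v)
		}
	}
	return resp, nil
}
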
	I0318 12:44:04.667621    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:04.667621    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:04.667621    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:04.667621    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:04.672348    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:04.672348    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:04.672348    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:04.672348    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:04 GMT
	I0318 12:44:04.672348    5712 round_trippers.go:580]     Audit-Id: 085eb688-0979-446e-a296-57e2b838aa06
	I0318 12:44:04.672348    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:04.672348    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:04.672348    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:04.673077    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:05.168046    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:05.168046    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:05.168046    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:05.168046    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:05.173150    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:05.173206    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:05.173206    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:05.173206    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:05.173206    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:05.173206    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:05.173206    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:05 GMT
	I0318 12:44:05.173206    5712 round_trippers.go:580]     Audit-Id: d3eb1728-48b9-422b-885d-7f724e20c866
	I0318 12:44:05.173206    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:05.666758    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:05.666758    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:05.667033    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:05.667033    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:05.670801    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:05.671522    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:05.671522    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:05.671522    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:05.671522    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:05 GMT
	I0318 12:44:05.671522    5712 round_trippers.go:580]     Audit-Id: 24d35937-1634-4588-b834-3ce3d677167e
	I0318 12:44:05.671522    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:05.671522    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:05.671786    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:06.167467    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:06.167467    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:06.167467    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:06.167612    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:06.172084    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:06.172337    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:06.172382    5712 round_trippers.go:580]     Audit-Id: 6f366b87-644e-426b-a3d3-b690c46d6eec
	I0318 12:44:06.172382    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:06.172382    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:06.172382    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:06.172382    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:06.172382    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:06 GMT
	I0318 12:44:06.172882    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:06.666191    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:06.666267    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:06.666267    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:06.666267    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:06.670638    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:06.670943    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:06.670943    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:06.671143    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:06.671143    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:06.671143    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:06.671143    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:06 GMT
	I0318 12:44:06.671143    5712 round_trippers.go:580]     Audit-Id: bc13a46e-e1d5-4d0d-b74a-f988d9488f28
	I0318 12:44:06.671439    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:06.671935    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:44:07.166156    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:07.166428    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:07.166497    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:07.166497    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:07.173264    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:07.173264    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:07.173721    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:07 GMT
	I0318 12:44:07.173721    5712 round_trippers.go:580]     Audit-Id: 1fa2bf9e-8a6c-42c1-b181-73897d659493
	I0318 12:44:07.173721    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:07.173721    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:07.173721    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:07.173721    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:07.173721    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:07.662739    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:07.662739    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:07.662739    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:07.662739    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:07.666309    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:07.667103    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:07.667167    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:07 GMT
	I0318 12:44:07.667167    5712 round_trippers.go:580]     Audit-Id: 898260f1-de67-4354-b2bd-af0296c415a9
	I0318 12:44:07.667167    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:07.667167    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:07.667167    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:07.667167    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:07.667422    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:08.166089    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:08.166089    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:08.166089    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:08.166164    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:08.169961    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:08.170950    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:08.170950    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:08.170950    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:08.170950    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:08.170950    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:08.170950    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:08 GMT
	I0318 12:44:08.170950    5712 round_trippers.go:580]     Audit-Id: ff7854a6-d09d-41b0-a2da-3b6605a1b799
	I0318 12:44:08.171448    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:08.665335    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:08.665395    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:08.665395    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:08.665395    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:08.669961    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:08.669961    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:08.669961    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:08.670372    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:08.670372    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:08.670372    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:08 GMT
	I0318 12:44:08.670372    5712 round_trippers.go:580]     Audit-Id: 7d283d31-8c0a-4899-8cb6-2b7b6aebaf62
	I0318 12:44:08.670372    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:08.670546    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:09.165579    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:09.165579    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:09.165579    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:09.165579    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:09.169957    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:09.170772    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:09.170772    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:09 GMT
	I0318 12:44:09.170772    5712 round_trippers.go:580]     Audit-Id: d39956e3-7811-4048-b0f0-a203dab89f57
	I0318 12:44:09.170772    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:09.170772    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:09.170772    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:09.170772    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:09.170772    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:09.171686    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:44:09.668898    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:09.668985    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:09.668985    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:09.668985    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:09.673241    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:09.673241    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:09.673241    5712 round_trippers.go:580]     Audit-Id: 7bd9a8f7-f224-4080-80dc-c3deb3f3adab
	I0318 12:44:09.673241    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:09.673241    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:09.673241    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:09.673241    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:09.673241    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:09 GMT
	I0318 12:44:09.674130    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:10.168195    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:10.168195    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:10.168292    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:10.168292    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:10.171597    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:10.171905    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:10.171905    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:10.171905    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:10.171905    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:10.171905    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:10.171905    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:10 GMT
	I0318 12:44:10.171905    5712 round_trippers.go:580]     Audit-Id: dd6fedc9-4dca-4008-ac70-cf0695ab5a03
	I0318 12:44:10.172347    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:10.670758    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:10.670955    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:10.670955    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:10.670955    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:10.676699    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:10.677122    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:10.677122    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:10.677122    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:10.677122    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:10.677122    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:10.677193    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:10 GMT
	I0318 12:44:10.677193    5712 round_trippers.go:580]     Audit-Id: 475fd1f4-0476-482d-9b0a-474c4f9acc5e
	I0318 12:44:10.677907    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:11.171902    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:11.171902    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:11.171994    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:11.171994    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:11.198871    5712 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0318 12:44:11.199048    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:11.199111    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:11.199111    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:11.199111    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:11.199111    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:11 GMT
	I0318 12:44:11.199111    5712 round_trippers.go:580]     Audit-Id: 826ebd82-eecd-4302-b789-b813d1d18b66
	I0318 12:44:11.199111    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:11.199111    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"1970","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 12:44:11.200096    5712 node_ready.go:53] node "multinode-642600" has status "Ready":"False"
	I0318 12:44:11.662448    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:11.662448    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:11.662503    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:11.662503    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:11.666914    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:11.666944    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:11.667021    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:11 GMT
	I0318 12:44:11.667021    5712 round_trippers.go:580]     Audit-Id: 0768de30-4e2f-4267-9394-814a7468bc6f
	I0318 12:44:11.667021    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:11.667021    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:11.667021    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:11.667021    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:11.667021    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2012","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 12:44:11.667611    5712 node_ready.go:49] node "multinode-642600" has status "Ready":"True"
	I0318 12:44:11.667611    5712 node_ready.go:38] duration metric: took 35.0099667s for node "multinode-642600" to be "Ready" ...
	I0318 12:44:11.667611    5712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
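	[editor's note] The 35s loop above is the node-readiness wait: the test binary re-issues GET /api/v1/nodes/multinode-642600 roughly every 500ms until the node's Ready condition flips from "False" to "True" (resourceVersion 1970 -> 2012). A minimal client-go sketch of that polling pattern follows, assuming nothing about minikube's real node_ready.go helpers beyond what the log shows; pollNodeReady and the 500ms interval are illustrative, and the client-go/apimachinery calls are standard library API.

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// pollNodeReady blocks until the named node's NodeReady condition is True,
	// mirroring the short-interval GET /api/v1/nodes/<name> loop in the log.
	func pollNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						// corresponds to: node "<name>" has status "Ready":"False"
						fmt.Printf("node %q has status %q:%q\n", name, c.Type, c.Status)
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}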
	I0318 12:44:11.667611    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:11.667611    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:11.667611    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:11.667611    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:11.673018    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:11.673863    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:11.673863    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:11.673863    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:11.673863    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:11.673863    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:11 GMT
	I0318 12:44:11.673863    5712 round_trippers.go:580]     Audit-Id: 51660724-8c83-40d6-941f-d102519e2f70
	I0318 12:44:11.673863    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:11.675246    5712 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2013"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83076 chars]
	I0318 12:44:11.679124    5712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace to be "Ready" ...
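	[editor's note] From here the wait moves to the per-pod phase announced above: list the kube-system pods for each system-critical label, then poll each pod (and its node) until the PodReady condition is True. A hypothetical sketch of that check, under the same assumptions as the previous one; podReady and waitPodsReady are illustrative names, not minikube's pod_ready.go helpers.

	package readiness

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podReady reports whether a pod's PodReady condition is True, which is
	// what the log's pod_ready.go status lines are printing.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodsReady lists kube-system pods matching one label selector
	// (e.g. "k8s-app=kube-dns") and reports each pod's readiness once;
	// a real waiter would re-poll until every pod reports true.
	func waitPodsReady(ctx context.Context, cs kubernetes.Interface, selector string) error {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			fmt.Printf("pod %q in %q namespace has status Ready:%t\n", p.Name, p.Namespace, podReady(&p))
		}
		return nil
	}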
	I0318 12:44:11.679247    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:11.679247    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:11.679357    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:11.679357    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:11.681576    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:11.681576    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:11.681576    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:11.681576    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:11.681576    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:11.681576    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:11.681576    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:11 GMT
	I0318 12:44:11.682568    5712 round_trippers.go:580]     Audit-Id: 68692235-b9a4-41aa-b52b-d932b2baf2b3
	I0318 12:44:11.682820    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:11.683623    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:11.683677    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:11.683677    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:11.683677    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:11.687249    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:11.687249    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:11.687249    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:11.687249    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:11.687249    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:11.687249    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:11 GMT
	I0318 12:44:11.687249    5712 round_trippers.go:580]     Audit-Id: 87ff9236-27d7-45d4-a6ba-5c07a01fd91b
	I0318 12:44:11.687249    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:11.687249    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2012","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 12:44:12.189290    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:12.189290    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:12.189290    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:12.189290    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:12.196764    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:44:12.196764    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:12.196764    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:12.196764    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:12.196764    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:12.196764    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:12 GMT
	I0318 12:44:12.196764    5712 round_trippers.go:580]     Audit-Id: 28ad4020-5575-4e1b-ac1d-227b725cbc4c
	I0318 12:44:12.196764    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:12.197406    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:12.198214    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:12.198214    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:12.198214    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:12.198214    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:12.201458    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:12.201528    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:12.201528    5712 round_trippers.go:580]     Audit-Id: 23ef2e71-f422-43a3-b48e-67f0ccafb647
	I0318 12:44:12.201528    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:12.201528    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:12.201528    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:12.201528    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:12.201652    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:12 GMT
	I0318 12:44:12.202389    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2012","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 12:44:12.689821    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:12.689895    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:12.689895    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:12.689895    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:12.695086    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:12.695086    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:12.695086    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:12 GMT
	I0318 12:44:12.695086    5712 round_trippers.go:580]     Audit-Id: 3909224e-955b-4e27-9bcb-fcd165c794b4
	I0318 12:44:12.695086    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:12.695086    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:12.695086    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:12.695086    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:12.695424    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:12.695616    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:12.695616    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:12.695616    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:12.695616    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:12.699509    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:12.699509    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:12.699509    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:12 GMT
	I0318 12:44:12.699509    5712 round_trippers.go:580]     Audit-Id: 4fba9b28-f2f3-45c7-b234-4ee90fa733f7
	I0318 12:44:12.699509    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:12.699509    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:12.699509    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:12.699509    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:12.699509    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2012","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 12:44:13.190512    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:13.190512    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:13.190512    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:13.190512    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:13.195114    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:13.195218    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:13.195218    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:13.195218    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:13.195218    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:13 GMT
	I0318 12:44:13.195218    5712 round_trippers.go:580]     Audit-Id: 4e79249d-da04-4313-93a9-3354b854e0b8
	I0318 12:44:13.195218    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:13.195218    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:13.195218    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:13.195847    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:13.195847    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:13.195847    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:13.195847    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:13.199600    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:13.200093    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:13.200093    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:13 GMT
	I0318 12:44:13.200093    5712 round_trippers.go:580]     Audit-Id: 2bbe5cdf-efff-4dc6-a479-6d79cddb9e6d
	I0318 12:44:13.200093    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:13.200093    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:13.200093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:13.200093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:13.200961    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2012","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 12:44:13.689055    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:13.689055    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:13.689055    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:13.689055    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:13.696084    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:44:13.696084    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:13.696084    5712 round_trippers.go:580]     Audit-Id: f2a50618-4423-4ac6-aec3-94fedde79059
	I0318 12:44:13.696084    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:13.696084    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:13.696084    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:13.696084    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:13.696084    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:13 GMT
	I0318 12:44:13.696084    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:13.697207    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:13.697207    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:13.697207    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:13.697207    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:13.701058    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:13.701204    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:13.701204    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:13.701204    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:13.701204    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:13.701204    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:13 GMT
	I0318 12:44:13.701204    5712 round_trippers.go:580]     Audit-Id: fc37cca9-bcba-45a3-83b3-c52fd084afd9
	I0318 12:44:13.701204    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:13.701513    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:13.702009    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
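The round_trippers lines above are client-go's verbose HTTP tracing (the same format kubectl prints at high verbosity, e.g. -v=8): minikube's readiness helper (pod_ready.go) re-fetches the coredns pod roughly every 500ms and logs "Ready":"False" until the pod's Ready condition turns true. The following is a minimal, illustrative client-go sketch of that polling loop — not minikube's actual implementation; the kubeconfig source and the 500ms cadence are assumptions read off the log timestamps.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady mirrors the check implied by pod_ready.go: a pod counts as
// ready only when its PodReady condition reports True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: credentials come from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll at the ~500ms cadence visible in the timestamps above.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.Background(), "coredns-5dd5756b68-fgn7v", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if podReady(pod) {
			fmt.Printf("pod %q is Ready\n", pod.Name)
			return
		}
		fmt.Printf("pod %q has status \"Ready\":\"False\"\n", pod.Name)
		time.Sleep(500 * time.Millisecond)
	}
}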
	I0318 12:44:14.187179    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:14.189894    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:14.190057    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:14.190057    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:14.193266    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:14.193960    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:14.193960    5712 round_trippers.go:580]     Audit-Id: a7cc1a20-6fe3-4d80-9ab2-8fad4d555963
	I0318 12:44:14.193960    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:14.193960    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:14.193960    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:14.193960    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:14.193960    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:14 GMT
	I0318 12:44:14.194292    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:14.195492    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:14.195492    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:14.195582    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:14.195582    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:14.198859    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:14.199018    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:14.199018    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:14 GMT
	I0318 12:44:14.199018    5712 round_trippers.go:580]     Audit-Id: 4f4d0bf8-3015-400a-a8bb-4e7a9bd8ab66
	I0318 12:44:14.199018    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:14.199018    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:14.199018    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:14.199018    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:14.199387    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:14.686852    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:14.686852    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:14.686852    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:14.686852    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:14.692315    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:14.692315    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:14.692315    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:14.692315    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:14.692315    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:14.692315    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:14 GMT
	I0318 12:44:14.692315    5712 round_trippers.go:580]     Audit-Id: cff36635-0841-44ed-9c51-8f1b0b3c60f0
	I0318 12:44:14.692315    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:14.692315    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:14.693313    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:14.693394    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:14.693394    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:14.693394    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:14.696634    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:14.696634    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:14.697142    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:14.697142    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:14.697142    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:14.697142    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:14 GMT
	I0318 12:44:14.697142    5712 round_trippers.go:580]     Audit-Id: e6729c94-2e8e-424c-94ad-7ef4ff4b4226
	I0318 12:44:14.697142    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:14.697648    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:15.188309    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:15.188309    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:15.188309    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:15.188309    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:15.192917    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:15.192917    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:15.192917    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:15.192917    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:15.192917    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:15.192917    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:15 GMT
	I0318 12:44:15.192917    5712 round_trippers.go:580]     Audit-Id: 3285ed7b-7ced-468f-a841-9faa545a27cc
	I0318 12:44:15.192917    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:15.194509    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:15.194967    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:15.194967    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:15.194967    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:15.194967    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:15.201267    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:15.201267    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:15.201267    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:15.201267    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:15 GMT
	I0318 12:44:15.201267    5712 round_trippers.go:580]     Audit-Id: 47d7aa2f-b56d-419e-8a32-c595a9d8290b
	I0318 12:44:15.201267    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:15.201267    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:15.201267    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:15.201996    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:15.688423    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:15.688485    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:15.688485    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:15.688485    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:15.692666    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:15.692666    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:15.692666    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:15.692666    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:15 GMT
	I0318 12:44:15.692666    5712 round_trippers.go:580]     Audit-Id: ecf79032-3aa7-4bc6-82d9-2ab8eb39336d
	I0318 12:44:15.692666    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:15.692666    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:15.692666    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:15.692666    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:15.693763    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:15.693763    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:15.693763    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:15.693763    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:15.697739    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:15.697739    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:15.697739    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:15.697739    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:15 GMT
	I0318 12:44:15.697739    5712 round_trippers.go:580]     Audit-Id: a0300dd2-98c2-4a28-9836-b8a0594f4b97
	I0318 12:44:15.697739    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:15.697739    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:15.697739    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:15.698346    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:16.189806    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:16.190125    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:16.190125    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:16.190125    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:16.194747    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:16.195302    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:16.195302    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:16 GMT
	I0318 12:44:16.195302    5712 round_trippers.go:580]     Audit-Id: 05ea2c45-deb2-4d18-be4e-93f7ba227959
	I0318 12:44:16.195399    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:16.195399    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:16.195438    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:16.195438    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:16.195670    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:16.196293    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:16.196293    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:16.196293    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:16.196293    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:16.199853    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:16.200850    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:16.200877    5712 round_trippers.go:580]     Audit-Id: 41764b48-4925-4b1b-a501-f26dd948e26b
	I0318 12:44:16.200877    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:16.200877    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:16.200877    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:16.200877    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:16.200877    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:16 GMT
	I0318 12:44:16.200930    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:16.201669    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:16.679732    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:16.679839    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:16.679839    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:16.679839    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:16.683217    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:16.683217    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:16.683217    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:16.683217    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:16.683217    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:16.683217    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:16.684224    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:16 GMT
	I0318 12:44:16.684224    5712 round_trippers.go:580]     Audit-Id: 6608c079-a2d7-43cd-ad67-050578afc1c7
	I0318 12:44:16.684505    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:16.685391    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:16.685391    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:16.685391    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:16.685391    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:16.688745    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:16.688745    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:16.688745    5712 round_trippers.go:580]     Audit-Id: a2f0aa03-092b-4358-9b32-baa0817a5b93
	I0318 12:44:16.688745    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:16.688745    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:16.688745    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:16.688745    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:16.688745    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:16 GMT
	I0318 12:44:16.689438    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:17.194490    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:17.194490    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:17.194490    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:17.194490    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:17.199181    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:17.199181    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:17.199181    5712 round_trippers.go:580]     Audit-Id: 05ee3b1e-0451-4971-9b1a-020bcd3ac56a
	I0318 12:44:17.199181    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:17.199181    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:17.199181    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:17.199181    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:17.199181    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:17 GMT
	I0318 12:44:17.199633    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:17.200002    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:17.200584    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:17.200584    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:17.200584    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:17.203962    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:17.203962    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:17.203962    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:17.203962    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:17.203962    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:17.203962    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:17.203962    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:17 GMT
	I0318 12:44:17.203962    5712 round_trippers.go:580]     Audit-Id: bb5ac872-8323-4a4d-a0ec-a35b06d5db0c
	I0318 12:44:17.204590    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:17.682732    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:17.682732    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:17.682732    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:17.682732    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:17.686405    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:17.686888    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:17.686948    5712 round_trippers.go:580]     Audit-Id: 50db086d-a553-4435-81c2-2b0d2e1ec308
	I0318 12:44:17.686948    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:17.686948    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:17.686948    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:17.687007    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:17.687042    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:17 GMT
	I0318 12:44:17.687076    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:17.688025    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:17.688077    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:17.688077    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:17.688105    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:17.690996    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:17.690996    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:17.690996    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:17.690996    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:17 GMT
	I0318 12:44:17.690996    5712 round_trippers.go:580]     Audit-Id: ed2b67b5-49dc-4fc3-a73a-9504062879ea
	I0318 12:44:17.691876    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:17.691938    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:17.691938    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:17.692532    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:18.186906    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:18.186983    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:18.186983    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:18.186983    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:18.191508    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:18.191710    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:18.191710    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:18 GMT
	I0318 12:44:18.191710    5712 round_trippers.go:580]     Audit-Id: fed979db-c6cc-4e5b-9d3b-057dbcd455a7
	I0318 12:44:18.191710    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:18.191710    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:18.191810    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:18.191810    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:18.192038    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:18.192303    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:18.192845    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:18.192845    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:18.192845    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:18.196147    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:18.196147    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:18.196147    5712 round_trippers.go:580]     Audit-Id: ddf6c14c-7522-42b4-83f3-b6b9b9732224
	I0318 12:44:18.196147    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:18.196147    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:18.196147    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:18.196147    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:18.196147    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:18 GMT
	I0318 12:44:18.196447    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:18.687290    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:18.687402    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:18.687402    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:18.687402    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:18.691698    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:18.692219    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:18.692219    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:18.692219    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:18.692219    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:18.692219    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:18 GMT
	I0318 12:44:18.692219    5712 round_trippers.go:580]     Audit-Id: 357060ba-d2e7-4223-9fa8-c16bba65f32a
	I0318 12:44:18.692219    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:18.692512    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:18.693183    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:18.693183    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:18.693183    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:18.693277    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:18.696420    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:18.696420    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:18.696420    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:18.696420    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:18.696420    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:18.696420    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:18 GMT
	I0318 12:44:18.696420    5712 round_trippers.go:580]     Audit-Id: eced3421-3a7f-44af-97cf-2857f3c4baa5
	I0318 12:44:18.696420    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:18.697337    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:18.697779    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
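Each poll iteration above pairs the pod GET with a GET on the pod's node (/api/v1/nodes/multinode-642600): a pod cannot become Ready while its node is unhealthy, so the node's conditions are re-read alongside the pod's. Below is a companion sketch of that node-side check, under the same assumptions as the earlier snippet (illustrative only, not minikube's code).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: same default-kubeconfig client setup as the earlier sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the node named in the paired GETs above and report its Ready condition.
	node, err := client.CoreV1().Nodes().Get(
		context.Background(), "multinode-642600", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %q Ready=%v (reason: %s)\n",
				node.Name, c.Status == corev1.ConditionTrue, c.Reason)
		}
	}
}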
	I0318 12:44:19.184814    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:19.187660    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:19.187660    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:19.187660    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:19.191157    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:19.191157    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:19.191157    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:19.191157    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:19.191157    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:19.192206    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:19 GMT
	I0318 12:44:19.192206    5712 round_trippers.go:580]     Audit-Id: d4ed1ae8-9361-4227-9ae3-5b041ed49910
	I0318 12:44:19.192206    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:19.192456    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:19.193304    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:19.193304    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:19.193304    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:19.193304    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:19.196093    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:19.196093    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:19.196093    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:19.196093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:19.196093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:19.196093    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:19 GMT
	I0318 12:44:19.196594    5712 round_trippers.go:580]     Audit-Id: 314e0cd0-9003-4a0b-9ff8-150072c7c15c
	I0318 12:44:19.196594    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:19.196826    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:19.684415    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:19.684415    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:19.684415    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:19.684415    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:19.689310    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:19.689310    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:19.689310    5712 round_trippers.go:580]     Audit-Id: d9991c05-4c4e-42ab-8450-d123f48c2fde
	I0318 12:44:19.689310    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:19.689310    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:19.689310    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:19.689310    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:19.689310    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:19 GMT
	I0318 12:44:19.689310    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:19.690473    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:19.690473    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:19.690473    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:19.690473    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:19.696541    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:19.696607    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:19.696607    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:19.696607    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:19.696607    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:19.696607    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:19 GMT
	I0318 12:44:19.696607    5712 round_trippers.go:580]     Audit-Id: 7166bc9d-fda4-460c-b473-b52bf353b794
	I0318 12:44:19.696607    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:19.696607    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:20.184737    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:20.184737    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:20.184737    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:20.184855    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:20.189772    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:20.189772    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:20.189772    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:20.189772    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:20.189772    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:20 GMT
	I0318 12:44:20.189772    5712 round_trippers.go:580]     Audit-Id: 05ee64e2-c995-4ea2-94d3-d68d214e36bd
	I0318 12:44:20.189772    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:20.189772    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:20.190405    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:20.191046    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:20.191046    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:20.191046    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:20.191046    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:20.193639    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:20.193639    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:20.193639    5712 round_trippers.go:580]     Audit-Id: e1a9c45c-df43-4395-aaa9-80f01d696884
	I0318 12:44:20.193639    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:20.194626    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:20.194626    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:20.194626    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:20.194626    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:20 GMT
	I0318 12:44:20.195054    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:20.685980    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:20.686195    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:20.686195    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:20.686195    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:20.690175    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:20.690175    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:20.690175    5712 round_trippers.go:580]     Audit-Id: 944456e4-665e-4ca2-b14c-6c08b49da556
	I0318 12:44:20.690175    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:20.690258    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:20.690258    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:20.690258    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:20.690258    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:20 GMT
	I0318 12:44:20.690493    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:20.691284    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:20.691284    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:20.691284    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:20.691284    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:20.697025    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:20.697025    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:20.697025    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:20 GMT
	I0318 12:44:20.697025    5712 round_trippers.go:580]     Audit-Id: 8e9daf81-187e-42ba-9516-554a19108696
	I0318 12:44:20.697025    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:20.697025    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:20.697025    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:20.697025    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:20.697880    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:20.697906    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
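
	[editor's note] The entries above show one iteration of a readiness poll: roughly every 500 ms the client GETs the coredns pod and its node, then pod_ready logs that the pod is still not Ready. A minimal client-go sketch of that readiness check follows; the helper name and the hard-coded pod name are illustrative assumptions, not minikube's actual implementation.

	    // readiness_check.go: a minimal client-go sketch of the check behind
	    // the pod_ready log lines above. isPodReady and the hard-coded pod
	    // name are illustrative, not minikube's actual code.
	    package main

	    import (
	    	"context"
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // isPodReady reports whether the pod's PodReady condition is True.
	    func isPodReady(pod *corev1.Pod) bool {
	    	for _, c := range pod.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }

	    func main() {
	    	// Load the local kubeconfig (~/.kube/config).
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	clientset, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// The same GET the log shows:
	    	// /api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	    	pod, err := clientset.CoreV1().Pods("kube-system").Get(
	    		context.TODO(), "coredns-5dd5756b68-fgn7v", metav1.GetOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("pod %q ready: %v\n", pod.Name, isPodReady(pod))
	    }

	The pod's resourceVersion stays at 1865 across iterations, so each poll is fetching the same unchanged object; the loop keeps logging Ready=False until the pod's conditions change.
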
	I0318 12:44:21.186002    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:21.186002    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:21.186002    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:21.186002    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:21.190030    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:21.190030    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:21.190030    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:21.190030    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:21.190030    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:21.190030    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:21 GMT
	I0318 12:44:21.190030    5712 round_trippers.go:580]     Audit-Id: 296a70f8-cd1d-4354-9ff9-3472574721e8
	I0318 12:44:21.190030    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:21.190030    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:21.191000    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:21.191000    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:21.191000    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:21.191000    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:21.194008    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:21.194008    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:21.194008    5712 round_trippers.go:580]     Audit-Id: ccf25fed-8468-4983-8188-e5585640997e
	I0318 12:44:21.194008    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:21.194008    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:21.194008    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:21.194008    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:21.194008    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:21 GMT
	I0318 12:44:21.194994    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:21.692322    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:21.692420    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:21.692420    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:21.692420    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:21.697256    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:21.697325    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:21.697325    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:21.697325    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:21.697325    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:21 GMT
	I0318 12:44:21.697325    5712 round_trippers.go:580]     Audit-Id: fa195ac6-3ca5-49be-aeed-5b87e2bed243
	I0318 12:44:21.697325    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:21.697325    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:21.697650    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:21.698534    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:21.698534    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:21.698534    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:21.698534    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:21.702603    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:21.702708    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:21.702708    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:21.702708    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:21.702708    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:21.702708    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:21.702708    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:21 GMT
	I0318 12:44:21.702708    5712 round_trippers.go:580]     Audit-Id: 5bea0762-0250-41b5-ac29-f243420a227e
	I0318 12:44:21.702919    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:22.190369    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:22.190369    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:22.190369    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:22.190369    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:22.195258    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:22.195258    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:22.195258    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:22 GMT
	I0318 12:44:22.195258    5712 round_trippers.go:580]     Audit-Id: 68c04b8c-0662-45cb-8226-2b8fa49c7a37
	I0318 12:44:22.195258    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:22.195258    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:22.195479    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:22.195479    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:22.195564    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:22.196754    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:22.196754    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:22.196754    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:22.196754    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:22.199867    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:22.200351    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:22.200351    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:22.200351    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:22.200351    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:22 GMT
	I0318 12:44:22.200351    5712 round_trippers.go:580]     Audit-Id: f0cd9ef8-d1e9-4635-8d0e-39ee443ce231
	I0318 12:44:22.200351    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:22.200351    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:22.200932    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:22.689351    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:22.689527    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:22.689527    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:22.689527    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:22.694615    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:22.695693    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:22.695693    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:22.695693    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:22.695737    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:22 GMT
	I0318 12:44:22.695737    5712 round_trippers.go:580]     Audit-Id: 51c77479-7ef8-4549-bdcf-29cdea895d25
	I0318 12:44:22.695737    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:22.695737    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:22.695737    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:22.696791    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:22.696791    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:22.696791    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:22.696791    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:22.699978    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:22.700875    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:22.700875    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:22 GMT
	I0318 12:44:22.700875    5712 round_trippers.go:580]     Audit-Id: 8656091a-bace-4ade-968e-ed7071a2bb7f
	I0318 12:44:22.700875    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:22.700925    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:22.700925    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:22.700925    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:22.701006    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:22.701617    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:23.191048    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:23.191048    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:23.191048    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:23.191048    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:23.195874    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:23.195874    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:23.195874    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:23.195874    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:23.195874    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:23.195874    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:23.195874    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:23 GMT
	I0318 12:44:23.195874    5712 round_trippers.go:580]     Audit-Id: ca521754-fc97-47a7-9195-3cd38de7268b
	I0318 12:44:23.195874    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:23.196943    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:23.196943    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:23.196943    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:23.196943    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:23.203500    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:23.203500    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:23.203500    5712 round_trippers.go:580]     Audit-Id: a1eee80f-b7c3-49f8-b7ce-0e095d26368d
	I0318 12:44:23.203500    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:23.203500    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:23.203500    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:23.203500    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:23.203500    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:23 GMT
	I0318 12:44:23.203500    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:23.681225    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:23.681225    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:23.681329    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:23.681329    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:23.683987    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:23.684629    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:23.684629    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:23.684629    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:23 GMT
	I0318 12:44:23.684629    5712 round_trippers.go:580]     Audit-Id: 8c0646e4-5030-4a93-8604-cf8c53d9b492
	I0318 12:44:23.684629    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:23.684629    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:23.684629    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:23.684949    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:23.685800    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:23.685800    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:23.685800    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:23.685800    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:23.689144    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:23.689144    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:23.689144    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:23.689144    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:23.689144    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:23.689144    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:23 GMT
	I0318 12:44:23.689144    5712 round_trippers.go:580]     Audit-Id: d97ae3b5-349d-4383-b52f-89febb47de97
	I0318 12:44:23.689144    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:23.689683    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:24.183512    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:24.187181    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:24.187181    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:24.187181    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:24.194438    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:44:24.194438    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:24.194438    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:24 GMT
	I0318 12:44:24.194438    5712 round_trippers.go:580]     Audit-Id: 365ca239-6035-4726-bb21-40a23ef0f551
	I0318 12:44:24.194438    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:24.194438    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:24.194438    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:24.194438    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:24.195105    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:24.195891    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:24.195891    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:24.195891    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:24.195891    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:24.198527    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:24.198527    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:24.198527    5712 round_trippers.go:580]     Audit-Id: 75c2da6b-08aa-49fc-adc0-619837128160
	I0318 12:44:24.198527    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:24.198527    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:24.198527    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:24.198527    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:24.198527    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:24 GMT
	I0318 12:44:24.199345    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:24.687835    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:24.687835    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:24.687835    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:24.687835    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:24.692441    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:24.692922    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:24.692922    5712 round_trippers.go:580]     Audit-Id: 7a0307ff-7b11-411e-a738-e35606bccc8f
	I0318 12:44:24.692922    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:24.692922    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:24.692922    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:24.692922    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:24.692922    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:24 GMT
	I0318 12:44:24.693236    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:24.694028    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:24.694082    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:24.694082    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:24.694082    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:24.697874    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:24.697874    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:24.697874    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:24 GMT
	I0318 12:44:24.697874    5712 round_trippers.go:580]     Audit-Id: 1de05b38-ea53-4391-954f-0c681c3d1b89
	I0318 12:44:24.697874    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:24.697874    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:24.697874    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:24.697874    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:24.697874    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:25.185296    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:25.185596    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:25.185596    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:25.185670    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:25.189231    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:25.189231    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:25.189231    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:25.189231    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:25 GMT
	I0318 12:44:25.189231    5712 round_trippers.go:580]     Audit-Id: 0770fdd1-d8a4-42be-aa42-264d5f9306d2
	I0318 12:44:25.189231    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:25.189231    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:25.189231    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:25.190314    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:25.191217    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:25.191269    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:25.191269    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:25.191269    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:25.194875    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:25.194875    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:25.194875    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:25 GMT
	I0318 12:44:25.194992    5712 round_trippers.go:580]     Audit-Id: e94c15c0-f098-4d72-bc69-d6fa71412fd0
	I0318 12:44:25.194992    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:25.194992    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:25.194992    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:25.194992    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:25.195053    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:25.195788    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
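
	[editor's note] A sketch of the poll cadence itself, reusing the isPodReady helper from the earlier sketch. The ~500 ms interval matches the timestamps in the log; the timeout parameter is an illustrative assumption. wait.PollUntilContextTimeout is from recent k8s.io/apimachinery releases (older code used the now-deprecated wait.PollImmediate); this is not minikube's actual implementation.

	    package main

	    import (
	    	"context"
	    	"time"

	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    )

	    // waitPodReady polls the pod roughly every 500 ms (the cadence seen
	    // in the log timestamps) until it is Ready or the timeout expires.
	    // isPodReady is the helper defined in the earlier sketch.
	    func waitPodReady(clientset *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	    	return wait.PollUntilContextTimeout(context.Background(),
	    		500*time.Millisecond, timeout, true,
	    		func(ctx context.Context) (bool, error) {
	    			pod, err := clientset.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	    			if err != nil {
	    				// A production poller might tolerate transient errors
	    				// instead of aborting; aborting keeps the sketch short.
	    				return false, err
	    			}
	    			return isPodReady(pod), nil
	    		})
	    }
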
	I0318 12:44:25.687613    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:25.687690    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:25.687690    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:25.687690    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:25.691052    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:25.692017    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:25.692085    5712 round_trippers.go:580]     Audit-Id: 5770b70f-5661-4482-aea1-14759485817d
	I0318 12:44:25.692085    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:25.692085    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:25.692085    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:25.692085    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:25.692085    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:25 GMT
	I0318 12:44:25.692085    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:25.692995    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:25.693067    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:25.693067    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:25.693067    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:25.696293    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:25.696293    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:25.696573    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:25.696633    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:25 GMT
	I0318 12:44:25.696633    5712 round_trippers.go:580]     Audit-Id: 6c69f8a8-44ea-48a3-98a3-425f3910b939
	I0318 12:44:25.696633    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:25.696633    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:25.696633    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:25.697314    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:26.186485    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:26.186560    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:26.186560    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:26.186560    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:26.189912    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:26.190966    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:26.190966    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:26.190966    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:26.190966    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:26 GMT
	I0318 12:44:26.190966    5712 round_trippers.go:580]     Audit-Id: ba95d9cb-04a3-4e2b-b6ea-4d0392e390cd
	I0318 12:44:26.190966    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:26.190966    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:26.191316    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:26.192037    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:26.192037    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:26.192037    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:26.192037    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:26.194649    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:26.194649    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:26.194649    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:26.194649    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:26.194649    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:26.194649    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:26.194649    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:26 GMT
	I0318 12:44:26.194649    5712 round_trippers.go:580]     Audit-Id: b2e83234-7314-4055-8529-007cc48facc0
	I0318 12:44:26.196345    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:26.688382    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:26.688382    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:26.688382    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:26.688382    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:26.691771    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:26.691771    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:26.691771    5712 round_trippers.go:580]     Audit-Id: 1df16aa0-1c0b-47c7-b307-379de7306aba
	I0318 12:44:26.691771    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:26.692341    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:26.692341    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:26.692341    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:26.692341    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:26 GMT
	I0318 12:44:26.692341    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:26.693311    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:26.693311    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:26.693311    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:26.693311    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:26.697630    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:26.697630    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:26.697814    5712 round_trippers.go:580]     Audit-Id: 8a85b6ef-88f8-4373-97fa-7b45bd3345f0
	I0318 12:44:26.697814    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:26.697814    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:26.697814    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:26.697814    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:26.697814    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:26 GMT
	I0318 12:44:26.698213    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:27.190331    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:27.190331    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:27.190331    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:27.190331    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:27.194805    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:27.195296    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:27.195296    5712 round_trippers.go:580]     Audit-Id: e4b40c8e-375d-470d-ab22-2b071736004f
	I0318 12:44:27.195296    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:27.195296    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:27.195296    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:27.195296    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:27.195296    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:27 GMT
	I0318 12:44:27.195744    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:27.196186    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:27.196186    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:27.196186    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:27.196186    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:27.199968    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:27.199968    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:27.199968    5712 round_trippers.go:580]     Audit-Id: e2b1872b-a14e-4414-a484-ce21db4a68c0
	I0318 12:44:27.199968    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:27.199968    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:27.199968    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:27.199968    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:27.199968    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:27 GMT
	I0318 12:44:27.201362    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:27.201839    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:27.688272    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:27.688542    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:27.688542    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:27.688542    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:27.692785    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:27.692837    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:27.692837    5712 round_trippers.go:580]     Audit-Id: 86c94cb3-990f-4e2f-93b0-fd291a1af69e
	I0318 12:44:27.692837    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:27.692837    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:27.692880    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:27.692880    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:27.692880    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:27 GMT
	I0318 12:44:27.693596    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:27.693745    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:27.694322    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:27.694322    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:27.694322    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:27.698129    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:27.698129    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:27.698556    5712 round_trippers.go:580]     Audit-Id: 374e4ac8-2526-4f20-a555-bdbcc8c5cc52
	I0318 12:44:27.698556    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:27.698556    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:27.698556    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:27.698556    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:27.698556    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:27 GMT
	I0318 12:44:27.698556    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:28.189054    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:28.189165    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:28.189165    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:28.189165    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:28.193679    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:28.193679    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:28.193679    5712 round_trippers.go:580]     Audit-Id: f446158c-2311-4125-ab37-1e5aa0f36123
	I0318 12:44:28.193679    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:28.193679    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:28.193852    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:28.193852    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:28.193852    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:28 GMT
	I0318 12:44:28.194247    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:28.195028    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:28.195028    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:28.195102    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:28.195102    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:28.198286    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:28.198286    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:28.198286    5712 round_trippers.go:580]     Audit-Id: 151bd438-a533-4683-9f1b-c1b7d337b5ff
	I0318 12:44:28.198286    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:28.198286    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:28.198996    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:28.198996    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:28.198996    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:28 GMT
	I0318 12:44:28.199379    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:28.690475    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:28.690475    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:28.690475    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:28.690475    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:28.694382    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:28.694382    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:28.695026    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:28 GMT
	I0318 12:44:28.695026    5712 round_trippers.go:580]     Audit-Id: 39426202-2074-49a8-9309-df2b267ebbdd
	I0318 12:44:28.695026    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:28.695026    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:28.695026    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:28.695026    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:28.695203    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:28.696056    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:28.696056    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:28.696108    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:28.696108    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:28.717239    5712 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0318 12:44:28.717239    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:28.717624    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:28 GMT
	I0318 12:44:28.717624    5712 round_trippers.go:580]     Audit-Id: 5f536d96-bee0-44ac-8da4-a1c6465d7479
	I0318 12:44:28.717624    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:28.717624    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:28.717624    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:28.717624    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:28.717947    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:29.191330    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:29.194978    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:29.194978    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:29.194978    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:29.200376    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:29.201410    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:29.201410    5712 round_trippers.go:580]     Audit-Id: 33130013-2b8a-4157-ab96-7c712701556c
	I0318 12:44:29.201443    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:29.201443    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:29.201443    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:29.201443    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:29.201443    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:29 GMT
	I0318 12:44:29.201847    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:29.202639    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:29.202704    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:29.202704    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:29.202704    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:29.205630    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:29.206029    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:29.206029    5712 round_trippers.go:580]     Audit-Id: 92ce3129-989c-4664-a7af-232965a1f1c1
	I0318 12:44:29.206029    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:29.206029    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:29.206029    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:29.206029    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:29.206029    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:29 GMT
	I0318 12:44:29.206029    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:29.206828    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:29.690504    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:29.690504    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:29.690504    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:29.690504    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:29.695038    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:29.695038    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:29.695537    5712 round_trippers.go:580]     Audit-Id: 090178bd-ce42-4dc9-b27b-705727259d7e
	I0318 12:44:29.695537    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:29.695537    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:29.695537    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:29.695537    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:29.695537    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:29 GMT
	I0318 12:44:29.696280    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:29.697440    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:29.697440    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:29.697440    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:29.697440    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:29.703505    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:29.703505    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:29.703505    5712 round_trippers.go:580]     Audit-Id: b04fcd21-8d69-4295-9e68-878a9948b0e4
	I0318 12:44:29.703505    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:29.703505    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:29.703505    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:29.703505    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:29.703505    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:29 GMT
	I0318 12:44:29.704493    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:30.191740    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:30.191979    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:30.191979    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:30.191979    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:30.195803    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:30.195803    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:30.196340    5712 round_trippers.go:580]     Audit-Id: c29edaca-1b98-4123-a622-957e56360b5f
	I0318 12:44:30.196340    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:30.196340    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:30.196340    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:30.196340    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:30.196472    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:30 GMT
	I0318 12:44:30.196588    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:30.197195    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:30.197195    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:30.197195    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:30.197195    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:30.201030    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:30.201030    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:30.201109    5712 round_trippers.go:580]     Audit-Id: 0eab9f9f-514f-44d8-a8c2-a86204bc2150
	I0318 12:44:30.201109    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:30.201109    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:30.201109    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:30.201109    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:30.201109    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:30 GMT
	I0318 12:44:30.201329    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:30.693710    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:30.693710    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:30.693775    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:30.693775    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:30.702563    5712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:44:30.702563    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:30.702828    5712 round_trippers.go:580]     Audit-Id: 77fdf92d-f89d-44a9-a05b-7867d180cf1f
	I0318 12:44:30.702828    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:30.702828    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:30.702828    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:30.702828    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:30.702828    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:30 GMT
	I0318 12:44:30.703079    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:30.703922    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:30.703979    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:30.703979    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:30.703979    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:30.708169    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:30.708169    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:30.708169    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:30.708169    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:30 GMT
	I0318 12:44:30.708169    5712 round_trippers.go:580]     Audit-Id: 675effea-ad05-42ea-b002-7080a2628de4
	I0318 12:44:30.708169    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:30.708169    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:30.708390    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:30.708799    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:31.192382    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:31.192638    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:31.192638    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:31.192638    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:31.196436    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:31.196940    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:31.196940    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:31 GMT
	I0318 12:44:31.196940    5712 round_trippers.go:580]     Audit-Id: 05f011bb-0606-4d55-aa31-22f1ba5d74a9
	I0318 12:44:31.196940    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:31.196940    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:31.196940    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:31.196940    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:31.196940    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:31.197867    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:31.197923    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:31.197923    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:31.197923    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:31.201214    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:31.201214    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:31.201214    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:31.201214    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:31.201214    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:31.201517    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:31 GMT
	I0318 12:44:31.201517    5712 round_trippers.go:580]     Audit-Id: 5992ea35-60a3-46ee-8e9e-7321539abc76
	I0318 12:44:31.201517    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:31.201926    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:31.692932    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:31.693033    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:31.693033    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:31.693033    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:31.697758    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:31.697940    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:31.697940    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:31.698024    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:31.698024    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:31.698024    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:31 GMT
	I0318 12:44:31.698024    5712 round_trippers.go:580]     Audit-Id: c55e7335-af75-4dec-b634-eb6eec4557da
	I0318 12:44:31.698024    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:31.698305    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:31.699123    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:31.699123    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:31.699123    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:31.699123    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:31.702003    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:31.702513    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:31.702550    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:31.702580    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:31 GMT
	I0318 12:44:31.702580    5712 round_trippers.go:580]     Audit-Id: 459647b1-86c1-45a8-a345-1c404f6135f6
	I0318 12:44:31.702580    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:31.702580    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:31.702580    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:31.702580    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:31.703314    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:32.194214    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:32.194214    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:32.194214    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:32.194214    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:32.198608    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:32.198608    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:32.198608    5712 round_trippers.go:580]     Audit-Id: c6750ae0-b2f8-4d95-a5e2-1794b85fdfa9
	I0318 12:44:32.198608    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:32.198608    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:32.198608    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:32.198608    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:32.198608    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:32 GMT
	I0318 12:44:32.199230    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:32.199937    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:32.199937    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:32.199937    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:32.199937    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:32.203521    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:32.203521    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:32.203635    5712 round_trippers.go:580]     Audit-Id: acb32f20-87d5-4042-b31b-2ba23efd38d3
	I0318 12:44:32.203635    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:32.203635    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:32.203635    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:32.203635    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:32.203635    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:32 GMT
	I0318 12:44:32.204306    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:32.680219    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:32.680219    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:32.680308    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:32.680308    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:32.685079    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:32.685264    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:32.685264    5712 round_trippers.go:580]     Audit-Id: 1798e15c-0678-4db5-80ee-5182b0ae30f7
	I0318 12:44:32.685264    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:32.685264    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:32.685264    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:32.685264    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:32.685264    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:32 GMT
	I0318 12:44:32.685525    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:32.686404    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:32.686471    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:32.686471    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:32.686471    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:32.689074    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:32.689074    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:32.690020    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:32.690060    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:32.690060    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:32 GMT
	I0318 12:44:32.690060    5712 round_trippers.go:580]     Audit-Id: 90749611-f9ba-4355-8857-6e4c035fbd2e
	I0318 12:44:32.690060    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:32.690060    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:32.690491    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:33.194838    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:33.194838    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:33.194838    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:33.194838    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:33.199930    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:33.199930    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:33.199930    5712 round_trippers.go:580]     Audit-Id: 3d30d2ef-3a3f-46d0-a78c-959c1dba9928
	I0318 12:44:33.199930    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:33.199930    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:33.199930    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:33.199930    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:33.199930    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:33 GMT
	I0318 12:44:33.199930    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:33.201121    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:33.201236    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:33.201236    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:33.201236    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:33.204447    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:33.204728    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:33.204728    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:33.204728    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:33.204728    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:33 GMT
	I0318 12:44:33.204728    5712 round_trippers.go:580]     Audit-Id: 7577c64a-ff66-4949-8a72-a79b5e15d602
	I0318 12:44:33.204728    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:33.204728    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:33.204728    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:33.693743    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:33.693743    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:33.693743    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:33.693743    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:33.699262    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:33.699262    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:33.699262    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:33.699262    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:33.699262    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:33.699262    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:33.699262    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:33 GMT
	I0318 12:44:33.699262    5712 round_trippers.go:580]     Audit-Id: d8692647-16fb-4411-997d-4417241c566d
	I0318 12:44:33.699262    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:33.700442    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:33.700442    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:33.700522    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:33.700522    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:33.704429    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:33.705187    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:33.705187    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:33.705187    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:33.705187    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:33.705187    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:33 GMT
	I0318 12:44:33.705187    5712 round_trippers.go:580]     Audit-Id: c0c3137e-e27c-4083-9ab2-c8913db71ef9
	I0318 12:44:33.705187    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:33.705187    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:33.706096    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:34.180798    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:34.183822    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:34.183822    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:34.183822    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:34.187601    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:34.188131    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:34.188208    5712 round_trippers.go:580]     Audit-Id: 2b69c6c9-ece5-46a7-96ed-20d93b99d56b
	I0318 12:44:34.188208    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:34.188208    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:34.188208    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:34.188208    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:34.188208    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:34 GMT
	I0318 12:44:34.188208    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:34.188959    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:34.188959    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:34.188959    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:34.188959    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:34.193430    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:34.193646    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:34.193646    5712 round_trippers.go:580]     Audit-Id: c121a1db-2043-415d-8ee3-bec023bf88e8
	I0318 12:44:34.193646    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:34.193646    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:34.193646    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:34.193646    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:34.193646    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:34 GMT
	I0318 12:44:34.193646    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:34.681413    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:34.681413    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:34.681413    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:34.681413    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:34.685995    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:34.686919    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:34.686919    5712 round_trippers.go:580]     Audit-Id: 1aca597f-aeed-49ce-911b-1f0a40618d1d
	I0318 12:44:34.686919    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:34.686974    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:34.686974    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:34.686974    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:34.686974    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:34 GMT
	I0318 12:44:34.687201    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:34.688012    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:34.688073    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:34.688130    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:34.688130    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:34.690813    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:34.691451    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:34.691451    5712 round_trippers.go:580]     Audit-Id: 75e02a0b-d062-403c-b9b8-85ac7b141d25
	I0318 12:44:34.691451    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:34.691451    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:34.691451    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:34.691451    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:34.691524    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:34 GMT
	I0318 12:44:34.691829    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:35.183360    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:35.183360    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:35.183360    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:35.183360    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:35.190039    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:35.190191    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:35.190191    5712 round_trippers.go:580]     Audit-Id: 0fcd2b6e-e3e2-432c-9c01-456f86970ea2
	I0318 12:44:35.190191    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:35.190191    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:35.190191    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:35.190191    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:35.190191    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:35 GMT
	I0318 12:44:35.190546    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:35.191235    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:35.191235    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:35.191235    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:35.191235    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:35.200958    5712 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 12:44:35.200958    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:35.200958    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:35 GMT
	I0318 12:44:35.200958    5712 round_trippers.go:580]     Audit-Id: 4f08f624-3192-4a83-8d17-2430c00cdb13
	I0318 12:44:35.201224    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:35.201224    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:35.201224    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:35.201224    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:35.201455    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:35.692096    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:35.692096    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:35.692096    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:35.692096    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:35.696093    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:35.696093    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:35.696093    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:35 GMT
	I0318 12:44:35.696093    5712 round_trippers.go:580]     Audit-Id: 41df05e5-0a39-4364-afea-7bf683432ecd
	I0318 12:44:35.696093    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:35.696093    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:35.696093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:35.696093    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:35.696093    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:35.697093    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:35.697093    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:35.697093    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:35.697093    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:35.701124    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:35.701124    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:35.701124    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:35.701124    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:35.701191    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:35.701191    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:35.701191    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:35 GMT
	I0318 12:44:35.701191    5712 round_trippers.go:580]     Audit-Id: b64a76c6-18c4-4946-8b07-13cdb8e11a54
	I0318 12:44:35.701649    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:36.185176    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:36.185176    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:36.185176    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:36.185176    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:36.192062    5712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:36.192356    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:36.192356    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:36 GMT
	I0318 12:44:36.192356    5712 round_trippers.go:580]     Audit-Id: 15d44108-2e7f-4f78-877b-ff6a558f6a23
	I0318 12:44:36.192356    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:36.192356    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:36.192356    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:36.192356    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:36.192626    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:36.192783    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:36.193314    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:36.193314    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:36.193366    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:36.197405    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:36.197405    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:36.197405    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:36 GMT
	I0318 12:44:36.197718    5712 round_trippers.go:580]     Audit-Id: 176965b7-6a70-4a66-9075-a021c56abc5a
	I0318 12:44:36.197718    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:36.197718    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:36.197718    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:36.197718    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:36.197914    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:36.198444    5712 pod_ready.go:102] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"False"
	I0318 12:44:36.680855    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:36.680937    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:36.681011    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:36.681011    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:36.685221    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:36.685221    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:36.685221    5712 round_trippers.go:580]     Audit-Id: e8b15e15-b4c7-4495-9882-65f5e53b717c
	I0318 12:44:36.685584    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:36.685584    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:36.685584    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:36.685584    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:36.685584    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:36 GMT
	I0318 12:44:36.685887    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"1865","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 12:44:36.686605    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:36.686605    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:36.686605    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:36.686605    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:36.692101    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:36.692101    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:36.692101    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:36.692101    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:36.692101    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:36.692101    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:36.692101    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:36 GMT
	I0318 12:44:36.692101    5712 round_trippers.go:580]     Audit-Id: dffd16db-5eee-4d28-8446-40cfe18bab83
	I0318 12:44:36.692676    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.181620    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:37.181620    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.181620    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.181620    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.190921    5712 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 12:44:37.191328    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.191328    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.191328    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.191328    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.191328    5712 round_trippers.go:580]     Audit-Id: 0cb190c9-721e-45d3-9526-648b92746f64
	I0318 12:44:37.191328    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.191328    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.192878    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"2048","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6723 chars]
	I0318 12:44:37.193736    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:37.193770    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.193770    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.193770    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.198616    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:37.198616    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.198616    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.198616    5712 round_trippers.go:580]     Audit-Id: ab65a1d6-c291-43b8-ad03-c5fedcaae94f
	I0318 12:44:37.198616    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.198616    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.198616    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.198896    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.199278    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.682572    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgn7v
	I0318 12:44:37.682689    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.682689    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.682689    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.690172    5712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:44:37.690172    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.690172    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.690172    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.690172    5712 round_trippers.go:580]     Audit-Id: c8fc5adf-36e3-4185-8151-237941e1352a
	I0318 12:44:37.690172    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.690172    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.690172    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.690172    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"2054","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0318 12:44:37.691407    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:37.691407    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.691407    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.691407    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.695001    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:37.695001    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.695001    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.695001    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.695001    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.695001    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.695001    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.695001    5712 round_trippers.go:580]     Audit-Id: 36831683-9c6e-41d3-946a-ea967d026971
	I0318 12:44:37.695970    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.695970    5712 pod_ready.go:92] pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:37.696589    5712 pod_ready.go:81] duration metric: took 26.016683s for pod "coredns-5dd5756b68-fgn7v" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.696589    5712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.696646    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-642600
	I0318 12:44:37.696784    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.696852    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.696852    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.700210    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:37.700210    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.700210    5712 round_trippers.go:580]     Audit-Id: 58cbf2d5-df20-4b65-8ace-75dbc407c605
	I0318 12:44:37.700210    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.700210    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.700210    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.700210    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.700210    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.700210    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-642600","namespace":"kube-system","uid":"6f0ca14e-af4b-4442-8a48-28b69c699976","resourceVersion":"1972","creationTimestamp":"2024-03-18T12:43:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.148.129:2379","kubernetes.io/config.hash":"d5f09afee1a6ef36657c1ae3335ddda6","kubernetes.io/config.mirror":"d5f09afee1a6ef36657c1ae3335ddda6","kubernetes.io/config.seen":"2024-03-18T12:43:24.228249982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:43:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0318 12:44:37.701132    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:37.701132    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.701132    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.701132    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.705261    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:37.705261    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.705261    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.705261    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.705261    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.705261    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.705261    5712 round_trippers.go:580]     Audit-Id: 0e8f5552-c2ac-460d-9018-0c509cb0f965
	I0318 12:44:37.705261    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.705261    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.705847    5712 pod_ready.go:92] pod "etcd-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:37.705847    5712 pod_ready.go:81] duration metric: took 9.2581ms for pod "etcd-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.705847    5712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.705847    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-642600
	I0318 12:44:37.705847    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.705847    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.705847    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.709027    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:37.709027    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.709134    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.709134    5712 round_trippers.go:580]     Audit-Id: fc189f3a-7220-4f47-ba34-3f2c56a72300
	I0318 12:44:37.709134    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.709134    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.709134    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.709134    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.709251    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-642600","namespace":"kube-system","uid":"ab8e6b8b-cbac-4c90-8f57-9af2760ced9c","resourceVersion":"1944","creationTimestamp":"2024-03-18T12:43:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.148.129:8443","kubernetes.io/config.hash":"624de65f019baf96d4a0e2fb6064e413","kubernetes.io/config.mirror":"624de65f019baf96d4a0e2fb6064e413","kubernetes.io/config.seen":"2024-03-18T12:43:24.228255882Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:43:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0318 12:44:37.710305    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:37.710376    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.710376    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.710376    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.712672    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:37.712672    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.712672    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.713034    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.713034    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.713034    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.713034    5712 round_trippers.go:580]     Audit-Id: 67a82f34-0039-46a5-9e2a-433ae69903b9
	I0318 12:44:37.713034    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.713375    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.713736    5712 pod_ready.go:92] pod "kube-apiserver-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:37.713736    5712 pod_ready.go:81] duration metric: took 7.8892ms for pod "kube-apiserver-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.713736    5712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.713995    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-642600
	I0318 12:44:37.714092    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.714092    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.714157    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.716468    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:37.716468    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.716468    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.716468    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.716468    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.716468    5712 round_trippers.go:580]     Audit-Id: 84381428-69d2-4e01-9fc2-08bb6c474a3a
	I0318 12:44:37.716468    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.716468    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.717414    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-642600","namespace":"kube-system","uid":"1dd2a576-c5a0-44e5-b194-545e8b18962c","resourceVersion":"1976","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a1608bc774d0b3e96e1b6fbbded5cb52","kubernetes.io/config.mirror":"a1608bc774d0b3e96e1b6fbbded5cb52","kubernetes.io/config.seen":"2024-03-18T12:18:50.896437006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0318 12:44:37.717414    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:37.717414    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.717414    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.717414    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.720290    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:37.721075    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.721075    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.721141    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.721141    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.721141    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.721141    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.721268    5712 round_trippers.go:580]     Audit-Id: 1960dc73-4e58-4d0a-a800-653714b542b6
	I0318 12:44:37.721519    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.722106    5712 pod_ready.go:92] pod "kube-controller-manager-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:37.722227    5712 pod_ready.go:81] duration metric: took 8.3696ms for pod "kube-controller-manager-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.722227    5712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4dg79" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.722437    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4dg79
	I0318 12:44:37.722510    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.722510    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.722562    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.726868    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:37.726868    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.726868    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.726868    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.726868    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.726868    5712 round_trippers.go:580]     Audit-Id: dd7b971e-e716-44f2-8184-4375f57c8a3b
	I0318 12:44:37.726868    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.726868    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.727356    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4dg79","generateName":"kube-proxy-","namespace":"kube-system","uid":"449242c2-ad12-4da5-b339-3be7ab8a9b16","resourceVersion":"1871","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0318 12:44:37.728352    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:37.728352    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.728352    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.728352    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.731347    5712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:37.731347    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.731347    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.731619    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.731619    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.731619    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.731619    5712 round_trippers.go:580]     Audit-Id: cfb72b6a-73ad-456c-82a4-8b95e610267a
	I0318 12:44:37.731619    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.731934    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:37.732330    5712 pod_ready.go:92] pod "kube-proxy-4dg79" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:37.732330    5712 pod_ready.go:81] duration metric: took 10.046ms for pod "kube-proxy-4dg79" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.732330    5712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-khbjt" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:37.887622    5712 request.go:629] Waited for 155.2912ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-khbjt
	I0318 12:44:37.887887    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-khbjt
	I0318 12:44:37.887951    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:37.887951    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:37.887951    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:37.893530    5712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:37.893530    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:37.893619    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:37 GMT
	I0318 12:44:37.893619    5712 round_trippers.go:580]     Audit-Id: b2dd9516-d217-4a89-88bc-6060a711ccac
	I0318 12:44:37.893639    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:37.893639    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:37.893639    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:37.893639    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:37.894001    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-khbjt","generateName":"kube-proxy-","namespace":"kube-system","uid":"594efa46-7e30-40e6-92dd-9c9c80bc787a","resourceVersion":"1825","creationTimestamp":"2024-03-18T12:27:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:27:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5771 chars]
	I0318 12:44:38.091944    5712 request.go:629] Waited for 197.5062ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m03
	I0318 12:44:38.092032    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m03
	I0318 12:44:38.092323    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:38.092405    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:38.092405    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:38.095865    5712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:38.096657    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:38.096657    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:38.096657    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:38 GMT
	I0318 12:44:38.096657    5712 round_trippers.go:580]     Audit-Id: e2072c20-1fa0-4ee1-b265-59b232f06eb6
	I0318 12:44:38.096657    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:38.096657    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:38.096657    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:38.096927    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m03","uid":"e9bc5257-e8c0-493d-a533-c2a8a832d45e","resourceVersion":"1992","creationTimestamp":"2024-03-18T12:38:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_38_47_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:38:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0318 12:44:38.097511    5712 pod_ready.go:97] node "multinode-642600-m03" hosting pod "kube-proxy-khbjt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600-m03" has status "Ready":"Unknown"
	I0318 12:44:38.097659    5712 pod_ready.go:81] duration metric: took 365.3266ms for pod "kube-proxy-khbjt" in "kube-system" namespace to be "Ready" ...
	E0318 12:44:38.097659    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600-m03" hosting pod "kube-proxy-khbjt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600-m03" has status "Ready":"Unknown"
	I0318 12:44:38.097659    5712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vts9f" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:38.295471    5712 request.go:629] Waited for 197.3092ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vts9f
	I0318 12:44:38.295660    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vts9f
	I0318 12:44:38.295660    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:38.295660    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:38.295660    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:38.300505    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:38.300505    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:38.300505    5712 round_trippers.go:580]     Audit-Id: f8a44f84-dfc4-47d2-93b8-d3f1effc3787
	I0318 12:44:38.300505    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:38.300505    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:38.300505    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:38.300505    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:38.301381    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:38 GMT
	I0318 12:44:38.302269    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vts9f","generateName":"kube-proxy-","namespace":"kube-system","uid":"9545be8f-07fd-49dd-99bd-e9e976e65e7b","resourceVersion":"2032","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"158ddb85-85d3-4864-bdec-d4555b6c7434","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"158ddb85-85d3-4864-bdec-d4555b6c7434\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5771 chars]
	I0318 12:44:38.482633    5712 request.go:629] Waited for 179.5233ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:44:38.482860    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600-m02
	I0318 12:44:38.483005    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:38.483005    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:38.483005    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:38.487439    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:38.487439    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:38.487711    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:38.487711    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:38.487711    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:38.487711    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:38.487711    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:38 GMT
	I0318 12:44:38.487711    5712 round_trippers.go:580]     Audit-Id: 4426fb33-3c5e-443c-9ac3-5aac8865b391
	I0318 12:44:38.488292    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600-m02","uid":"93581133-1f04-49ae-bd62-0ecc4f7796cb","resourceVersion":"2040","creationTimestamp":"2024-03-18T12:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_22_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4583 chars]
	I0318 12:44:38.488767    5712 pod_ready.go:97] node "multinode-642600-m02" hosting pod "kube-proxy-vts9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600-m02" has status "Ready":"Unknown"
	I0318 12:44:38.488837    5712 pod_ready.go:81] duration metric: took 391.1756ms for pod "kube-proxy-vts9f" in "kube-system" namespace to be "Ready" ...
	E0318 12:44:38.488837    5712 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-642600-m02" hosting pod "kube-proxy-vts9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-642600-m02" has status "Ready":"Unknown"
	I0318 12:44:38.488837    5712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:38.685928    5712 request.go:629] Waited for 196.8223ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-642600
	I0318 12:44:38.686345    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-642600
	I0318 12:44:38.686345    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:38.686345    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:38.686460    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:38.691410    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:38.691410    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:38.691410    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:38.691410    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:38.691410    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:38 GMT
	I0318 12:44:38.691410    5712 round_trippers.go:580]     Audit-Id: 820e668c-1131-49a1-b2d2-fc96c4698517
	I0318 12:44:38.691410    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:38.691410    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:38.691410    5712 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-642600","namespace":"kube-system","uid":"52e29d3b-d6e9-4109-916d-74123a2ab190","resourceVersion":"1955","creationTimestamp":"2024-03-18T12:18:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cf50844b540be8ed0b3e767db413ac8f","kubernetes.io/config.mirror":"cf50844b540be8ed0b3e767db413ac8f","kubernetes.io/config.seen":"2024-03-18T12:18:50.896438106Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0318 12:44:38.889238    5712 request.go:629] Waited for 196.7125ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:38.889449    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes/multinode-642600
	I0318 12:44:38.889449    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:38.889449    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:38.889541    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:38.893888    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:38.893888    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:38.893888    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:38.893888    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:38.893888    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:38.893888    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:38 GMT
	I0318 12:44:38.893888    5712 round_trippers.go:580]     Audit-Id: 3276ce98-f6ba-49b9-8f1f-3889cbb8318b
	I0318 12:44:38.893888    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:38.893888    5712 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:18:46Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 12:44:38.894599    5712 pod_ready.go:92] pod "kube-scheduler-multinode-642600" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:38.894599    5712 pod_ready.go:81] duration metric: took 405.6364ms for pod "kube-scheduler-multinode-642600" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:38.894599    5712 pod_ready.go:38] duration metric: took 27.2268168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
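[Editor's note] The pod_ready.go trace above repeats one pattern per system pod: GET the pod, GET its node, and treat the pod as "Ready" only when its PodReady condition is "True" (and skip it with a warning when the hosting node is not Ready, as for the m02/m03 proxies). The following is a minimal, hypothetical client-go sketch of that check, not minikube's actual implementation; the kubeconfig path, pod name, poll interval, and QPS/Burst values are assumptions. The low QPS/Burst also illustrates where the "Waited ... due to client-side throttling" lines come from: client-go delays requests locally once its burst budget is spent.

    // readiness_sketch.go — illustrative only; not minikube's pod_ready.go.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from a kubeconfig (path is an assumption for this sketch).
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// Low client-side rate limits reproduce the throttling waits seen above.
    	cfg.QPS = 5
    	cfg.Burst = 10

    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait in the trace
    	for time.Now().Before(deadline) {
    		pod, err := clientset.CoreV1().Pods("kube-system").
    			Get(context.TODO(), "etcd-multinode-642600", metav1.GetOptions{})
    		if err != nil {
    			panic(err)
    		}
    		// A pod is "Ready" when its PodReady condition reports status "True".
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				fmt.Printf("pod %q is Ready\n", pod.Name)
    				return
    			}
    		}
    		time.Sleep(2 * time.Second) // poll interval is an assumption
    	}
    	fmt.Println("timed out waiting for Ready")
    }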
	I0318 12:44:38.894599    5712 api_server.go:52] waiting for apiserver process to appear ...
	I0318 12:44:38.906171    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 12:44:38.935847    5712 command_runner.go:130] > a48a6d310b86
	I0318 12:44:38.935993    5712 logs.go:276] 1 containers: [a48a6d310b86]
	I0318 12:44:38.946472    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 12:44:38.970744    5712 command_runner.go:130] > 8e7911b58c58
	I0318 12:44:38.971709    5712 logs.go:276] 1 containers: [8e7911b58c58]
	I0318 12:44:38.982095    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 12:44:39.006125    5712 command_runner.go:130] > fcf17db92b35
	I0318 12:44:39.006125    5712 command_runner.go:130] > e81f1d2fdb36
	I0318 12:44:39.006125    5712 logs.go:276] 2 containers: [fcf17db92b35 e81f1d2fdb36]
	I0318 12:44:39.016676    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 12:44:39.047010    5712 command_runner.go:130] > bd1e4f4d262e
	I0318 12:44:39.047113    5712 command_runner.go:130] > 47777d4c0b90
	I0318 12:44:39.047113    5712 logs.go:276] 2 containers: [bd1e4f4d262e 47777d4c0b90]
	I0318 12:44:39.057608    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 12:44:39.083081    5712 command_runner.go:130] > 575b41a3a85a
	I0318 12:44:39.083081    5712 command_runner.go:130] > 4bbad08fe59a
	I0318 12:44:39.083790    5712 logs.go:276] 2 containers: [575b41a3a85a 4bbad08fe59a]
	I0318 12:44:39.093122    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 12:44:39.118727    5712 command_runner.go:130] > 14ae9398d33b
	I0318 12:44:39.118727    5712 command_runner.go:130] > a54be4436901
	I0318 12:44:39.118827    5712 logs.go:276] 2 containers: [14ae9398d33b a54be4436901]
	I0318 12:44:39.129760    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 12:44:39.159256    5712 command_runner.go:130] > 9fec05a61d2a
	I0318 12:44:39.159256    5712 command_runner.go:130] > 5cf42651cb21
	I0318 12:44:39.160324    5712 logs.go:276] 2 containers: [9fec05a61d2a 5cf42651cb21]
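[Editor's note] After enumerating one or two container IDs per component with filtered `docker ps` calls, the trace gathers each container's recent output, as the logs.go lines below show. A hypothetical local stand-in for that step (minikube actually runs the command inside the VM via ssh_runner) is simply:

    // gather_logs_sketch.go — illustrative only; not minikube's logs.go.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Container IDs taken from the trace above; tail the last 400 log lines
    	// of each, mirroring `docker logs --tail 400 <id>`.
    	ids := []string{"9fec05a61d2a", "5cf42651cb21"}
    	for _, id := range ids {
    		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			fmt.Printf("docker logs %s failed: %v\n", id, err)
    			continue
    		}
    		fmt.Printf("=== kindnet %s ===\n%s", id, out)
    	}
    }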
	I0318 12:44:39.160401    5712 logs.go:123] Gathering logs for kindnet [5cf42651cb21] ...
	I0318 12:44:39.160496    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf42651cb21"
	I0318 12:44:39.200666    5712 command_runner.go:130] ! I0318 12:29:43.278241       1 main.go:227] handling current node
	I0318 12:44:39.200992    5712 command_runner.go:130] ! I0318 12:29:43.278258       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.200992    5712 command_runner.go:130] ! I0318 12:29:43.278267       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:43.279034       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:43.279112       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:53.290788       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:53.290919       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:53.290935       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:53.290944       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:53.291443       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:29:53.291608       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:03.307097       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:03.307405       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:03.307624       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:03.307713       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:03.307989       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:03.308095       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:13.315412       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:13.315512       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:13.315528       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:13.315537       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:13.316187       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:13.316277       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:23.331223       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:23.331328       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:23.331344       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:23.331352       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:23.331895       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:23.332071       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:33.338821       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:33.338848       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:33.338860       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:33.338866       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:33.339004       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:33.339017       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:43.354041       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:43.354126       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:43.354142       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:43.354153       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:43.354280       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:43.354293       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:53.362056       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:53.362198       1 main.go:227] handling current node
	I0318 12:44:39.201030    5712 command_runner.go:130] ! I0318 12:30:53.362230       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201629    5712 command_runner.go:130] ! I0318 12:30:53.362239       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201629    5712 command_runner.go:130] ! I0318 12:30:53.362887       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201629    5712 command_runner.go:130] ! I0318 12:30:53.363194       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201714    5712 command_runner.go:130] ! I0318 12:31:03.378995       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201714    5712 command_runner.go:130] ! I0318 12:31:03.379039       1 main.go:227] handling current node
	I0318 12:44:39.201714    5712 command_runner.go:130] ! I0318 12:31:03.379096       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201714    5712 command_runner.go:130] ! I0318 12:31:03.379108       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201794    5712 command_runner.go:130] ! I0318 12:31:03.379432       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201794    5712 command_runner.go:130] ! I0318 12:31:03.379450       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201794    5712 command_runner.go:130] ! I0318 12:31:13.392082       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201794    5712 command_runner.go:130] ! I0318 12:31:13.392188       1 main.go:227] handling current node
	I0318 12:44:39.201861    5712 command_runner.go:130] ! I0318 12:31:13.392224       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201861    5712 command_runner.go:130] ! I0318 12:31:13.392249       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201861    5712 command_runner.go:130] ! I0318 12:31:13.392820       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.201961    5712 command_runner.go:130] ! I0318 12:31:13.392974       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.201961    5712 command_runner.go:130] ! I0318 12:31:23.402269       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.201994    5712 command_runner.go:130] ! I0318 12:31:23.402391       1 main.go:227] handling current node
	I0318 12:44:39.201994    5712 command_runner.go:130] ! I0318 12:31:23.402408       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.201994    5712 command_runner.go:130] ! I0318 12:31:23.402417       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.201994    5712 command_runner.go:130] ! I0318 12:31:23.403188       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202056    5712 command_runner.go:130] ! I0318 12:31:23.403223       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202080    5712 command_runner.go:130] ! I0318 12:31:33.413396       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:33.413577       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:33.413639       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:33.413654       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:33.414293       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:33.414437       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:43.424274       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:43.424320       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:43.424332       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:43.424339       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:43.424591       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:43.424608       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:53.433473       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:53.433591       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:53.433607       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:53.433615       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:53.433851       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:31:53.433959       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:03.443363       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:03.443411       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:03.443424       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:03.443450       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:03.444602       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:03.445390       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:13.460166       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:13.460215       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:13.460229       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:13.460237       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:13.460679       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:13.460697       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:23.479958       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:23.480007       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:23.480024       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:23.480032       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:23.480521       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:23.480578       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:33.491143       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:33.491190       1 main.go:227] handling current node
	I0318 12:44:39.202104    5712 command_runner.go:130] ! I0318 12:32:33.491204       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202647    5712 command_runner.go:130] ! I0318 12:32:33.491211       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202647    5712 command_runner.go:130] ! I0318 12:32:33.491340       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202688    5712 command_runner.go:130] ! I0318 12:32:33.491369       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202738    5712 command_runner.go:130] ! I0318 12:32:43.505355       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202738    5712 command_runner.go:130] ! I0318 12:32:43.505474       1 main.go:227] handling current node
	I0318 12:44:39.202815    5712 command_runner.go:130] ! I0318 12:32:43.505490       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202815    5712 command_runner.go:130] ! I0318 12:32:43.505498       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202815    5712 command_runner.go:130] ! I0318 12:32:43.505666       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202815    5712 command_runner.go:130] ! I0318 12:32:43.505696       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202898    5712 command_runner.go:130] ! I0318 12:32:53.513310       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202898    5712 command_runner.go:130] ! I0318 12:32:53.513340       1 main.go:227] handling current node
	I0318 12:44:39.202898    5712 command_runner.go:130] ! I0318 12:32:53.513350       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202898    5712 command_runner.go:130] ! I0318 12:32:53.513357       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.202898    5712 command_runner.go:130] ! I0318 12:32:53.513783       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.202986    5712 command_runner.go:130] ! I0318 12:32:53.513865       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.202986    5712 command_runner.go:130] ! I0318 12:33:03.527897       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.202986    5712 command_runner.go:130] ! I0318 12:33:03.528343       1 main.go:227] handling current node
	I0318 12:44:39.202986    5712 command_runner.go:130] ! I0318 12:33:03.528485       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.202986    5712 command_runner.go:130] ! I0318 12:33:03.528785       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203075    5712 command_runner.go:130] ! I0318 12:33:03.529110       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203097    5712 command_runner.go:130] ! I0318 12:33:03.529205       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203097    5712 command_runner.go:130] ! I0318 12:33:13.538048       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203097    5712 command_runner.go:130] ! I0318 12:33:13.538183       1 main.go:227] handling current node
	I0318 12:44:39.203097    5712 command_runner.go:130] ! I0318 12:33:13.538222       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203163    5712 command_runner.go:130] ! I0318 12:33:13.538317       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203163    5712 command_runner.go:130] ! I0318 12:33:13.538750       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203223    5712 command_runner.go:130] ! I0318 12:33:13.538888       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203223    5712 command_runner.go:130] ! I0318 12:33:23.555771       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:23.555820       1 main.go:227] handling current node
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:23.555895       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:23.555905       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:23.556511       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:23.556780       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:33.566023       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:33.566190       1 main.go:227] handling current node
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:33.566208       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:33.566217       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:33.566931       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:33.567031       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:43.581332       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:43.581382       1 main.go:227] handling current node
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:43.581449       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:43.581482       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:43.582063       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:43.582166       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:53.588426       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:53.588602       1 main.go:227] handling current node
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:53.588619       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:53.588628       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:53.588919       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:33:53.588937       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:03.604902       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:03.605007       1 main.go:227] handling current node
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:03.605023       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:03.605032       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:03.605612       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:03.605696       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:13.618369       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:13.618488       1 main.go:227] handling current node
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:13.618585       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:13.618604       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:13.618738       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203249    5712 command_runner.go:130] ! I0318 12:34:13.618747       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:23.626772       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:23.626887       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:23.626903       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:23.626911       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:23.627415       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:23.627448       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:33.644122       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:33.644215       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:33.644233       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:33.644757       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:33.645128       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:33.645240       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:43.661684       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:43.661731       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:43.661744       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:43.661751       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:43.662532       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:43.662645       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:53.676649       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:53.677242       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:53.677518       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:53.677631       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:53.677873       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:34:53.677905       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:03.685328       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:03.685457       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:03.685474       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:03.685483       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:03.685861       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:03.686001       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:13.702673       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:13.702782       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:13.702801       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:13.703456       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:13.703827       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:13.703864       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:23.711167       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:23.711370       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:23.711388       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:23.711398       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:23.712127       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:23.712222       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:33.724041       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:33.724810       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:33.724973       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:33.725045       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:33.725458       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:33.725875       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:43.740216       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:43.740493       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:43.740511       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:43.740520       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:43.741453       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:43.741584       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:53.748632       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:53.749163       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:53.749285       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:53.749498       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:53.749815       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:35:53.749904       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:03.765208       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:03.765326       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:03.765343       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:03.765351       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:03.765883       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:03.766028       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:13.775221       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:13.775396       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:13.775430       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:13.775502       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:13.776058       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:13.776177       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:23.790073       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:23.790179       1 main.go:227] handling current node
	I0318 12:44:39.203695    5712 command_runner.go:130] ! I0318 12:36:23.790195       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:23.790207       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:23.790761       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:23.790798       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:33.800116       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:33.800240       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:33.800256       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:33.800265       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:33.800837       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:33.800858       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:43.817961       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:43.818115       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:43.818132       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:43.818146       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:43.818537       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:43.818661       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:53.827340       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:53.827385       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:53.827398       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:53.827406       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:53.827787       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:36:53.827885       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:03.840761       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:03.840837       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:03.840851       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:03.840859       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:03.841285       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:03.841319       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:13.848127       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:13.848174       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:13.848188       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:13.848195       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:13.848630       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:13.848646       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:23.863745       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:23.863916       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:23.863950       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:23.863996       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:23.864419       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:23.864510       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:33.876214       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:33.876331       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:33.876347       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:33.876355       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:33.877021       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:33.877100       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:43.886399       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:43.886544       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:43.886626       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:43.886636       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:43.886872       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:43.886890       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:53.903761       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:53.903845       1 main.go:227] handling current node
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:53.903871       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:53.903880       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:53.905033       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.204693    5712 command_runner.go:130] ! I0318 12:37:53.905181       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.205415    5712 command_runner.go:130] ! I0318 12:38:03.919532       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205415    5712 command_runner.go:130] ! I0318 12:38:03.919783       1 main.go:227] handling current node
	I0318 12:44:39.205415    5712 command_runner.go:130] ! I0318 12:38:03.919840       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205415    5712 command_runner.go:130] ! I0318 12:38:03.919894       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205415    5712 command_runner.go:130] ! I0318 12:38:03.920221       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.205415    5712 command_runner.go:130] ! I0318 12:38:03.920390       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.205518    5712 command_runner.go:130] ! I0318 12:38:13.927894       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205518    5712 command_runner.go:130] ! I0318 12:38:13.928004       1 main.go:227] handling current node
	I0318 12:44:39.205518    5712 command_runner.go:130] ! I0318 12:38:13.928022       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205518    5712 command_runner.go:130] ! I0318 12:38:13.928031       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205518    5712 command_runner.go:130] ! I0318 12:38:13.928232       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.205601    5712 command_runner.go:130] ! I0318 12:38:13.928269       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.205601    5712 command_runner.go:130] ! I0318 12:38:23.943692       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205601    5712 command_runner.go:130] ! I0318 12:38:23.943780       1 main.go:227] handling current node
	I0318 12:44:39.205601    5712 command_runner.go:130] ! I0318 12:38:23.943795       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205601    5712 command_runner.go:130] ! I0318 12:38:23.943804       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205601    5712 command_runner.go:130] ! I0318 12:38:23.944523       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.205676    5712 command_runner.go:130] ! I0318 12:38:23.944596       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.205676    5712 command_runner.go:130] ! I0318 12:38:33.952000       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205676    5712 command_runner.go:130] ! I0318 12:38:33.952098       1 main.go:227] handling current node
	I0318 12:44:39.205676    5712 command_runner.go:130] ! I0318 12:38:33.952114       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205753    5712 command_runner.go:130] ! I0318 12:38:33.952123       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205753    5712 command_runner.go:130] ! I0318 12:38:33.952466       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:39.205753    5712 command_runner.go:130] ! I0318 12:38:33.952503       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:39.205753    5712 command_runner.go:130] ! I0318 12:38:43.965979       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205753    5712 command_runner.go:130] ! I0318 12:38:43.966101       1 main.go:227] handling current node
	I0318 12:44:39.205753    5712 command_runner.go:130] ! I0318 12:38:43.966117       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:43.966125       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.989210       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.989308       1 main.go:227] handling current node
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.989322       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.989373       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.989864       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.989957       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:38:53.990028       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.157.200 Flags: [] Table: 0} 
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:39:03.996429       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.205844    5712 command_runner.go:130] ! I0318 12:39:03.996598       1 main.go:227] handling current node
	I0318 12:44:39.205977    5712 command_runner.go:130] ! I0318 12:39:03.996614       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.205977    5712 command_runner.go:130] ! I0318 12:39:03.996623       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.205977    5712 command_runner.go:130] ! I0318 12:39:03.996739       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.205977    5712 command_runner.go:130] ! I0318 12:39:03.996753       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.205977    5712 command_runner.go:130] ! I0318 12:39:14.008318       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206050    5712 command_runner.go:130] ! I0318 12:39:14.008384       1 main.go:227] handling current node
	I0318 12:44:39.206050    5712 command_runner.go:130] ! I0318 12:39:14.008398       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206050    5712 command_runner.go:130] ! I0318 12:39:14.008405       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206050    5712 command_runner.go:130] ! I0318 12:39:14.009080       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206115    5712 command_runner.go:130] ! I0318 12:39:14.009179       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206115    5712 command_runner.go:130] ! I0318 12:39:24.016154       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206171    5712 command_runner.go:130] ! I0318 12:39:24.016315       1 main.go:227] handling current node
	I0318 12:44:39.206171    5712 command_runner.go:130] ! I0318 12:39:24.016330       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206209    5712 command_runner.go:130] ! I0318 12:39:24.016338       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206209    5712 command_runner.go:130] ! I0318 12:39:24.016842       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206209    5712 command_runner.go:130] ! I0318 12:39:24.016875       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206209    5712 command_runner.go:130] ! I0318 12:39:34.029061       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206281    5712 command_runner.go:130] ! I0318 12:39:34.029159       1 main.go:227] handling current node
	I0318 12:44:39.206305    5712 command_runner.go:130] ! I0318 12:39:34.029175       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:34.029184       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:34.030103       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:34.030216       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:44.037921       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:44.037960       1 main.go:227] handling current node
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:44.037972       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:44.037981       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:44.038243       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:44.038318       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:54.057786       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:54.058021       1 main.go:227] handling current node
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:54.058100       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:54.058189       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:54.058376       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:39:54.058478       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:04.067119       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:04.067262       1 main.go:227] handling current node
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:04.067280       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:04.067289       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:04.067742       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:04.067846       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:14.082426       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:14.082921       1 main.go:227] handling current node
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:14.082946       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:14.082956       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:14.083174       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:14.083247       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:24.098060       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:24.098161       1 main.go:227] handling current node
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:24.098178       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:24.098187       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:24.098316       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:24.098324       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:34.335103       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:34.335169       1 main.go:227] handling current node
	I0318 12:44:39.206334    5712 command_runner.go:130] ! I0318 12:40:34.335185       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:34.335192       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:34.335470       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:34.335488       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:44.342962       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:44.343122       1 main.go:227] handling current node
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:44.343139       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:44.343148       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:44.343738       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.206880    5712 command_runner.go:130] ! I0318 12:40:44.343780       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
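	The kindnet output above is one reconciliation loop repeating on a roughly ten-second tick: each pass logs the daemon's own node ("handling current node") and, for every remote node, that node's pod CIDR and a gateway route to it. The loop's effect is visible at 12:38:53, when multinode-642600-m03 comes back with a new IP (172.25.157.200) and a new CIDR (10.244.3.0/24) and a fresh route is installed. A minimal sketch of such a loop, assuming the vishvananda/netlink package; the nodeInfo type and reconcile helper are illustrative names, not kindnet's actual source:

// Illustrative only; requires Linux and CAP_NET_ADMIN to actually run.
package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

// nodeInfo stands in for the node data kindnet reads from the
// Kubernetes API: the node's primary IP and its pod CIDR.
type nodeInfo struct {
	Name    string
	IP      net.IP
	PodCIDR string
	Current bool // true for the node this daemon runs on
}

// reconcile walks the node list once, mirroring one log cycle above:
// the current node needs no route; every remote node gets a route to
// its pod CIDR with the node IP as gateway.
func reconcile(nodes []nodeInfo) error {
	for _, n := range nodes {
		log.Printf("Handling node with IPs: map[%s:{}]", n.IP)
		if n.Current {
			log.Print("handling current node")
			continue
		}
		_, dst, err := net.ParseCIDR(n.PodCIDR)
		if err != nil {
			return err
		}
		log.Printf("Node %s has CIDR %s", n.Name, n.PodCIDR)
		// RouteReplace is idempotent: it installs the route if missing
		// and rewrites it when the CIDR or gateway changes, as when
		// m03 moved from 10.244.2.0/24 to 10.244.3.0/24 at 12:38:53.
		if err := netlink.RouteReplace(&netlink.Route{Dst: dst, Gw: n.IP}); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Node set as of 12:38:53 in the log above.
	nodes := []nodeInfo{
		{Name: "multinode-642600", IP: net.ParseIP("172.25.151.112"), Current: true},
		{Name: "multinode-642600-m02", IP: net.ParseIP("172.25.159.102"), PodCIDR: "10.244.1.0/24"},
		{Name: "multinode-642600-m03", IP: net.ParseIP("172.25.157.200"), PodCIDR: "10.244.3.0/24"},
	}
	if err := reconcile(nodes); err != nil {
		log.Fatal(err)
	}
}

	Using RouteReplace rather than RouteAdd keeps each pass idempotent, which matches the log: unchanged nodes produce the same two lines every cycle with no route churn.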
	I0318 12:44:39.227790    5712 logs.go:123] Gathering logs for dmesg ...
	I0318 12:44:39.228795    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 12:44:39.253573    5712 command_runner.go:130] > [Mar18 12:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.129398] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.023142] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.067111] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.023049] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0318 12:44:39.253573    5712 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +5.633479] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.746575] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +1.948336] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +7.356358] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0318 12:44:39.253573    5712 command_runner.go:130] > [Mar18 12:42] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.196447] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [Mar18 12:43] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.116812] kauditd_printk_skb: 73 callbacks suppressed
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.565179] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.224131] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +0.243543] systemd-fstab-generator[1034]: Ignoring "noauto" option for root device
	I0318 12:44:39.253573    5712 command_runner.go:130] > [  +2.986318] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +0.197212] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +0.228503] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +0.297734] systemd-fstab-generator[1258]: Ignoring "noauto" option for root device
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +0.969011] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +0.114690] kauditd_printk_skb: 205 callbacks suppressed
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +3.575437] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	I0318 12:44:39.254578    5712 command_runner.go:130] > [  +1.537938] kauditd_printk_skb: 44 callbacks suppressed
	I0318 12:44:39.254675    5712 command_runner.go:130] > [  +6.654182] kauditd_printk_skb: 30 callbacks suppressed
	I0318 12:44:39.254675    5712 command_runner.go:130] > [  +4.384606] systemd-fstab-generator[2563]: Ignoring "noauto" option for root device
	I0318 12:44:39.254675    5712 command_runner.go:130] > [  +7.200668] kauditd_printk_skb: 70 callbacks suppressed
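	The dmesg block above was produced by the exact filter the runner logs at 12:44:39.228795 (warnings and worse, trimmed to the last 400 lines) executed over SSH inside the guest. A minimal sketch of issuing that same command from Go, assuming golang.org/x/crypto/ssh; the host address, user, and key path are placeholders, not minikube's actual connection plumbing:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder credentials; minikube resolves these from its machine
	// config, which this sketch does not reproduce.
	key, err := os.ReadFile("/path/to/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.25.151.112:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Same filter the log shows: warnings and worse, last 400 lines.
	out, err := session.CombinedOutput(`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}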
	I0318 12:44:39.256303    5712 logs.go:123] Gathering logs for kube-scheduler [bd1e4f4d262e] ...
	I0318 12:44:39.256303    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1e4f4d262e"
	I0318 12:44:39.296399    5712 command_runner.go:130] ! I0318 12:43:27.649061       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:39.296399    5712 command_runner.go:130] ! W0318 12:43:30.548831       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 12:44:39.296399    5712 command_runner.go:130] ! W0318 12:43:30.549092       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:39.296399    5712 command_runner.go:130] ! W0318 12:43:30.549282       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 12:44:39.296399    5712 command_runner.go:130] ! W0318 12:43:30.549461       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 12:44:39.296891    5712 command_runner.go:130] ! I0318 12:43:30.613305       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 12:44:39.296923    5712 command_runner.go:130] ! I0318 12:43:30.613417       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.296923    5712 command_runner.go:130] ! I0318 12:43:30.618512       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 12:44:39.296923    5712 command_runner.go:130] ! I0318 12:43:30.619171       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 12:44:39.296923    5712 command_runner.go:130] ! I0318 12:43:30.619276       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:39.296923    5712 command_runner.go:130] ! I0318 12:43:30.620071       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:39.296923    5712 command_runner.go:130] ! I0318 12:43:30.720411       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:39.299846    5712 logs.go:123] Gathering logs for kube-scheduler [47777d4c0b90] ...
	I0318 12:44:39.299902    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47777d4c0b90"
	I0318 12:44:39.335743    5712 command_runner.go:130] ! I0318 12:18:43.828879       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.562226       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.562618       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.562705       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.562793       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 12:44:39.335809    5712 command_runner.go:130] ! I0318 12:18:46.615857       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 12:44:39.335809    5712 command_runner.go:130] ! I0318 12:18:46.615957       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.335809    5712 command_runner.go:130] ! I0318 12:18:46.622177       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 12:44:39.335809    5712 command_runner.go:130] ! I0318 12:18:46.622201       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:39.335809    5712 command_runner.go:130] ! I0318 12:18:46.625084       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 12:44:39.335809    5712 command_runner.go:130] ! I0318 12:18:46.625162       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.631110       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:39.335809    5712 command_runner.go:130] ! E0318 12:18:46.631164       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.634891       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:39.335809    5712 command_runner.go:130] ! E0318 12:18:46.634917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.636313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:39.335809    5712 command_runner.go:130] ! E0318 12:18:46.638655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:39.335809    5712 command_runner.go:130] ! W0318 12:18:46.636730       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336392    5712 command_runner.go:130] ! E0318 12:18:46.639099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336448    5712 command_runner.go:130] ! W0318 12:18:46.636905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336483    5712 command_runner.go:130] ! E0318 12:18:46.639254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336483    5712 command_runner.go:130] ! W0318 12:18:46.636986       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336483    5712 command_runner.go:130] ! E0318 12:18:46.639495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336587    5712 command_runner.go:130] ! W0318 12:18:46.641683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:39.336587    5712 command_runner.go:130] ! E0318 12:18:46.641953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:39.336587    5712 command_runner.go:130] ! W0318 12:18:46.642236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:39.336587    5712 command_runner.go:130] ! E0318 12:18:46.642375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.642673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.646073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.647270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.646147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.647534       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.646208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.647719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.646271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.647738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.646322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.647752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.647915       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:46.650301       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:46.650528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:47.471960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:47.472093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! W0318 12:18:47.540921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:39.336844    5712 command_runner.go:130] ! E0318 12:18:47.541368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:39.337419    5712 command_runner.go:130] ! W0318 12:18:47.545171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:39.337468    5712 command_runner.go:130] ! E0318 12:18:47.546126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.563772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.563806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.597770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.597873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.684794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.685008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.685352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.685509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.840132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.840303       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.879838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.880363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:47.906171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:47.906493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:48.059997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:48.060049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! W0318 12:18:48.096160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.337503    5712 command_runner.go:130] ! E0318 12:18:48.096304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:39.338031    5712 command_runner.go:130] ! W0318 12:18:48.096504       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:39.338080    5712 command_runner.go:130] ! E0318 12:18:48.096662       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:39.338080    5712 command_runner.go:130] ! W0318 12:18:48.133175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:39.338080    5712 command_runner.go:130] ! E0318 12:18:48.133469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:39.338080    5712 command_runner.go:130] ! W0318 12:18:48.135066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:39.338080    5712 command_runner.go:130] ! E0318 12:18:48.135196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:39.338080    5712 command_runner.go:130] ! I0318 12:18:50.022459       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:39.338080    5712 command_runner.go:130] ! E0318 12:40:51.995231       1 run.go:74] "command failed" err="finished without leader elect"
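The block above is the kube-scheduler's own log: the repeated reflector "forbidden" warnings are the scheduler's informers being rejected while the apiserver is still establishing RBAC for system:kube-scheduler, and they stop once the client-ca cache syncs at 12:18:50; the final "finished without leader elect" at 12:40:51 records the scheduler exiting when its leader-election context ended, consistent with the node restart that follows. A minimal sketch for confirming the permissions the warnings complain about and the current leader-election holder, assuming the multinode-642600 kubeconfig context (the cluster name is taken from the node names later in this log):

	kubectl --context multinode-642600 auth can-i list pods --as=system:kube-scheduler
	kubectl --context multinode-642600 -n kube-system get lease kube-scheduler -o yaml

Both are plain kubectl: the first impersonates the scheduler's user to test the RBAC rule directly, the second shows the coordination.k8s.io lease the scheduler elects on.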
	I0318 12:44:39.350356    5712 logs.go:123] Gathering logs for kube-proxy [575b41a3a85a] ...
	I0318 12:44:39.350356    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 575b41a3a85a"
	I0318 12:44:39.385101    5712 command_runner.go:130] ! I0318 12:43:33.336778       1 server_others.go:69] "Using iptables proxy"
	I0318 12:44:39.385156    5712 command_runner.go:130] ! I0318 12:43:33.550433       1 node.go:141] Successfully retrieved node IP: 172.25.148.129
	I0318 12:44:39.385156    5712 command_runner.go:130] ! I0318 12:43:33.793084       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:44:39.385156    5712 command_runner.go:130] ! I0318 12:43:33.793109       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:44:39.385217    5712 command_runner.go:130] ! I0318 12:43:33.796954       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:44:39.385253    5712 command_runner.go:130] ! I0318 12:43:33.798936       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:44:39.385253    5712 command_runner.go:130] ! I0318 12:43:33.800347       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:44:39.385301    5712 command_runner.go:130] ! I0318 12:43:33.800569       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.385301    5712 command_runner.go:130] ! I0318 12:43:33.803648       1 config.go:188] "Starting service config controller"
	I0318 12:44:39.385301    5712 command_runner.go:130] ! I0318 12:43:33.805156       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:44:39.385301    5712 command_runner.go:130] ! I0318 12:43:33.805421       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:44:39.385372    5712 command_runner.go:130] ! I0318 12:43:33.805584       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:44:39.385372    5712 command_runner.go:130] ! I0318 12:43:33.808628       1 config.go:315] "Starting node config controller"
	I0318 12:44:39.385372    5712 command_runner.go:130] ! I0318 12:43:33.808736       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:44:39.385436    5712 command_runner.go:130] ! I0318 12:43:33.905580       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:44:39.385436    5712 command_runner.go:130] ! I0318 12:43:33.907041       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:44:39.385436    5712 command_runner.go:130] ! I0318 12:43:33.909416       1 shared_informer.go:318] Caches are synced for node config
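Two kube-proxy containers are dumped back to back: 575b41a3a85a above is the instance started after the 12:43 restart, and 4bbad08fe59a below is the pre-restart instance from 12:19. Note that the retrieved node IP changes between them (172.25.151.112 before, 172.25.148.129 after), i.e. the Hyper-V VM picked up a new address across the restart. A hedged way to line the two containers up on the node, assuming the Docker runtime and the multinode-642600 profile used in this run:

	out/minikube-windows-amd64.exe -p multinode-642600 ssh
	docker ps -a --filter name=kube-proxy

The first command opens a shell on the node; the second lists both kube-proxy containers with their creation times and exit status.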
	I0318 12:44:39.386346    5712 logs.go:123] Gathering logs for kube-proxy [4bbad08fe59a] ...
	I0318 12:44:39.386895    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbad08fe59a"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:04.970720       1 server_others.go:69] "Using iptables proxy"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:04.997380       1 node.go:141] Successfully retrieved node IP: 172.25.151.112
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.099028       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.099065       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.102885       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.103013       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.103652       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.103704       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.105505       1 config.go:188] "Starting service config controller"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.106093       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.106131       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.106138       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.107424       1 config.go:315] "Starting node config controller"
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.107456       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.206699       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.206811       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:44:39.417696    5712 command_runner.go:130] ! I0318 12:19:05.207857       1 shared_informer.go:318] Caches are synced for node config
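Both kube-proxy logs show the same healthy startup path: the iptables backend is selected (no IPv6 iptables support, hence single-stack IPv4), route_localnet=1 is set so NodePorts answer on localhost, and all three config caches sync. A minimal check of what that leaves behind on the node, assuming the iptables proxier as logged:

	sudo iptables -t nat -L KUBE-SERVICES | head
	sudo sysctl net.ipv4.conf.all.route_localnet

The first lists the service NAT chain that kube-proxy programs; the second should report 1, matching the proxier.go message above.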
	I0318 12:44:39.420657    5712 logs.go:123] Gathering logs for kube-controller-manager [14ae9398d33b] ...
	I0318 12:44:39.420709    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ae9398d33b"
	I0318 12:44:39.452685    5712 command_runner.go:130] ! I0318 12:43:27.406049       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:39.452685    5712 command_runner.go:130] ! I0318 12:43:29.733819       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:44:39.452685    5712 command_runner.go:130] ! I0318 12:43:29.734137       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.452685    5712 command_runner.go:130] ! I0318 12:43:29.737351       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:39.452809    5712 command_runner.go:130] ! I0318 12:43:29.737598       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:39.452809    5712 command_runner.go:130] ! I0318 12:43:29.739365       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:44:39.452809    5712 command_runner.go:130] ! I0318 12:43:29.740428       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:39.452809    5712 command_runner.go:130] ! I0318 12:43:32.581261       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 12:44:39.453395    5712 command_runner.go:130] ! I0318 12:43:32.597867       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 12:44:39.453897    5712 command_runner.go:130] ! I0318 12:43:32.602078       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 12:44:39.453952    5712 command_runner.go:130] ! I0318 12:43:32.602099       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 12:44:39.454168    5712 command_runner.go:130] ! I0318 12:43:32.605600       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 12:44:39.454220    5712 command_runner.go:130] ! I0318 12:43:32.605807       1 expand_controller.go:328] "Starting expand controller"
	I0318 12:44:39.454220    5712 command_runner.go:130] ! I0318 12:43:32.605957       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 12:44:39.454220    5712 command_runner.go:130] ! I0318 12:43:32.620725       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 12:44:39.454263    5712 command_runner.go:130] ! I0318 12:43:32.621286       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 12:44:39.454318    5712 command_runner.go:130] ! I0318 12:43:32.621374       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 12:44:39.454318    5712 command_runner.go:130] ! I0318 12:43:32.663010       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 12:44:39.454318    5712 command_runner.go:130] ! I0318 12:43:32.663383       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 12:44:39.454353    5712 command_runner.go:130] ! I0318 12:43:32.663451       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 12:44:39.454398    5712 command_runner.go:130] ! I0318 12:43:32.674431       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 12:44:39.454398    5712 command_runner.go:130] ! I0318 12:43:32.675030       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 12:44:39.454398    5712 command_runner.go:130] ! I0318 12:43:32.675045       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 12:44:39.454398    5712 command_runner.go:130] ! I0318 12:43:32.680220       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 12:44:39.454497    5712 command_runner.go:130] ! I0318 12:43:32.680236       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 12:44:39.454568    5712 command_runner.go:130] ! I0318 12:43:32.680266       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.454568    5712 command_runner.go:130] ! I0318 12:43:32.681919       1 shared_informer.go:318] Caches are synced for tokens
	I0318 12:44:39.454625    5712 command_runner.go:130] ! I0318 12:43:32.684132       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 12:44:39.454650    5712 command_runner.go:130] ! I0318 12:43:32.684147       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.684164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.685811       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.685845       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.686123       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.687526       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.687845       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.687858       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.687918       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.691958       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.692673       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.696192       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.696622       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.701031       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.701415       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.701449       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.701458       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 12:44:39.454679    5712 command_runner.go:130] ! E0318 12:43:32.705162       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.705349       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.705364       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.705376       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.750736       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.751361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! W0318 12:43:32.751515       1 shared_informer.go:593] resyncPeriod 19h34m1.540802039s is smaller than resyncCheckPeriod 20h12m46.622656472s and the informer has already started. Changing it to 20h12m46.622656472s
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.752012       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 12:44:39.454679    5712 command_runner.go:130] ! I0318 12:43:32.752529       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 12:44:39.456045    5712 command_runner.go:130] ! I0318 12:43:32.752719       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 12:44:39.456127    5712 command_runner.go:130] ! I0318 12:43:32.752884       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 12:44:39.456361    5712 command_runner.go:130] ! I0318 12:43:32.753191       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 12:44:39.456623    5712 command_runner.go:130] ! I0318 12:43:32.753284       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 12:44:39.456684    5712 command_runner.go:130] ! I0318 12:43:32.753677       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 12:44:39.456684    5712 command_runner.go:130] ! I0318 12:43:32.753791       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.753884       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.754036       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.754202       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.754691       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.755001       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.755205       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.755784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 12:44:39.456727    5712 command_runner.go:130] ! I0318 12:43:32.755974       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 12:44:39.456863    5712 command_runner.go:130] ! I0318 12:43:32.756144       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 12:44:39.456912    5712 command_runner.go:130] ! I0318 12:43:32.756649       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 12:44:39.456912    5712 command_runner.go:130] ! I0318 12:43:32.756826       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 12:44:39.456947    5712 command_runner.go:130] ! I0318 12:43:32.757119       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:39.456947    5712 command_runner.go:130] ! I0318 12:43:32.757364       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 12:44:39.456980    5712 command_runner.go:130] ! I0318 12:43:32.757580       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 12:44:39.457001    5712 command_runner.go:130] ! E0318 12:43:32.773718       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 12:44:39.457001    5712 command_runner.go:130] ! I0318 12:43:32.773746       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 12:44:39.457035    5712 command_runner.go:130] ! I0318 12:43:32.786590       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 12:44:39.457035    5712 command_runner.go:130] ! I0318 12:43:32.786978       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 12:44:39.457067    5712 command_runner.go:130] ! I0318 12:43:32.787007       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 12:44:39.457067    5712 command_runner.go:130] ! I0318 12:43:32.795770       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.798452       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.798585       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.801712       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.802261       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.806063       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.823560       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.823578       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.823595       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.823621       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.833033       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.833480       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.833494       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.862160       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.862209       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 12:44:39.457094    5712 command_runner.go:130] ! I0318 12:43:32.862524       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 12:44:39.457703    5712 command_runner.go:130] ! I0318 12:43:32.862562       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 12:44:39.457703    5712 command_runner.go:130] ! I0318 12:43:32.862573       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 12:44:39.457703    5712 command_runner.go:130] ! I0318 12:43:32.883369       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 12:44:39.457703    5712 command_runner.go:130] ! I0318 12:43:32.886141       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 12:44:39.457764    5712 command_runner.go:130] ! I0318 12:43:32.886674       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 12:44:39.457764    5712 command_runner.go:130] ! I0318 12:43:32.896468       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 12:44:39.457764    5712 command_runner.go:130] ! I0318 12:43:32.896951       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 12:44:39.457764    5712 command_runner.go:130] ! I0318 12:43:32.897135       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.900325       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.900580       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.903531       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.917793       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.918152       1 horizontal.go:200] "Starting HPA controller"
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.918638       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.920489       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 12:44:39.457816    5712 command_runner.go:130] ! I0318 12:43:32.920802       1 gc_controller.go:101] "Starting GC controller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.922940       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.923834       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.924143       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.924461       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.935394       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.935610       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.935623       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.996434       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.996586       1 job_controller.go:226] "Starting job controller"
	I0318 12:44:39.457944    5712 command_runner.go:130] ! I0318 12:43:32.996666       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 12:44:39.458093    5712 command_runner.go:130] ! I0318 12:43:33.085354       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 12:44:39.458093    5712 command_runner.go:130] ! I0318 12:43:33.086157       1 disruption.go:433] "Sending events to api server."
	I0318 12:44:39.458093    5712 command_runner.go:130] ! I0318 12:43:33.086235       1 disruption.go:444] "Starting disruption controller"
	I0318 12:44:39.458093    5712 command_runner.go:130] ! I0318 12:43:33.086245       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 12:44:39.458093    5712 command_runner.go:130] ! I0318 12:43:33.141477       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 12:44:39.458162    5712 command_runner.go:130] ! I0318 12:43:33.142359       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 12:44:39.458162    5712 command_runner.go:130] ! I0318 12:43:33.142566       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 12:44:39.458229    5712 command_runner.go:130] ! I0318 12:43:33.186973       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 12:44:39.458229    5712 command_runner.go:130] ! I0318 12:43:33.187335       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 12:44:39.458255    5712 command_runner.go:130] ! I0318 12:43:33.187410       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 12:44:39.458255    5712 command_runner.go:130] ! I0318 12:43:33.236517       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 12:44:39.458301    5712 command_runner.go:130] ! I0318 12:43:33.236982       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 12:44:39.458301    5712 command_runner.go:130] ! I0318 12:43:33.237471       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 12:44:39.458329    5712 command_runner.go:130] ! I0318 12:43:33.286539       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 12:44:39.458329    5712 command_runner.go:130] ! I0318 12:43:33.287154       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 12:44:39.458389    5712 command_runner.go:130] ! I0318 12:43:33.287375       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 12:44:39.458413    5712 command_runner.go:130] ! I0318 12:43:43.355688       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.355845       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.356879       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.357033       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.359716       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.361043       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.361062       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.364706       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.364861       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.364989       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.369492       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.369675       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.369706       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.375944       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.376145       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.377600       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.390058       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.405940       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600\" does not exist"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.408115       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.408433       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.408623       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m02\" does not exist"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.408708       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.408817       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.421506       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.446678       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.459596       1 shared_informer.go:318] Caches are synced for node
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.459833       1 range_allocator.go:174] "Sending events to api server"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.460258       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.460829       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.461091       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.461418       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.463618       1 shared_informer.go:318] Caches are synced for namespace
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.466097       1 shared_informer.go:318] Caches are synced for taint
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.466427       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.466639       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.466863       1 taint_manager.go:210] "Sending events to api server"
	I0318 12:44:39.458443    5712 command_runner.go:130] ! I0318 12:43:43.468821       1 event.go:307] "Event occurred" object="multinode-642600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600 event: Registered Node multinode-642600 in Controller"
	I0318 12:44:39.458981    5712 command_runner.go:130] ! I0318 12:43:43.469328       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller"
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.469579       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.469959       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.477268       1 shared_informer.go:318] Caches are synced for deployment
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.486297       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.487082       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.487171       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.487768       1 shared_informer.go:318] Caches are synced for TTL
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.487848       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.489265       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.497682       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 12:44:39.459028    5712 command_runner.go:130] ! I0318 12:43:43.498610       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 12:44:39.459190    5712 command_runner.go:130] ! I0318 12:43:43.498725       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 12:44:39.459190    5712 command_runner.go:130] ! I0318 12:43:43.501123       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600"
	I0318 12:44:39.459190    5712 command_runner.go:130] ! I0318 12:43:43.503362       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 12:44:39.459190    5712 command_runner.go:130] ! I0318 12:43:43.505991       1 shared_informer.go:318] Caches are synced for expand
	I0318 12:44:39.459241    5712 command_runner.go:130] ! I0318 12:43:43.503938       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 12:44:39.459241    5712 command_runner.go:130] ! I0318 12:43:43.506104       1 shared_informer.go:318] Caches are synced for service account
	I0318 12:44:39.459241    5712 command_runner.go:130] ! I0318 12:43:43.505782       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m02"
	I0318 12:44:39.459241    5712 command_runner.go:130] ! I0318 12:43:43.505818       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m03"
	I0318 12:44:39.459293    5712 command_runner.go:130] ! I0318 12:43:43.506356       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 12:44:39.459293    5712 command_runner.go:130] ! I0318 12:43:43.521010       1 shared_informer.go:318] Caches are synced for HPA
	I0318 12:44:39.459293    5712 command_runner.go:130] ! I0318 12:43:43.524230       1 shared_informer.go:318] Caches are synced for GC
	I0318 12:44:39.459334    5712 command_runner.go:130] ! I0318 12:43:43.527081       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 12:44:39.459334    5712 command_runner.go:130] ! I0318 12:43:43.534422       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 12:44:39.459334    5712 command_runner.go:130] ! I0318 12:43:43.537721       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 12:44:39.459334    5712 command_runner.go:130] ! I0318 12:43:43.545260       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 12:44:39.459382    5712 command_runner.go:130] ! I0318 12:43:43.546769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.454588ms"
	I0318 12:44:39.459382    5712 command_runner.go:130] ! I0318 12:43:43.547853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.476888ms"
	I0318 12:44:39.459382    5712 command_runner.go:130] ! I0318 12:43:43.552128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66µs"
	I0318 12:44:39.459382    5712 command_runner.go:130] ! I0318 12:43:43.552429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.199µs"
	I0318 12:44:39.459442    5712 command_runner.go:130] ! I0318 12:43:43.565701       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 12:44:39.459442    5712 command_runner.go:130] ! I0318 12:43:43.580927       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 12:44:39.459442    5712 command_runner.go:130] ! I0318 12:43:43.585098       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 12:44:39.459481    5712 command_runner.go:130] ! I0318 12:43:43.586663       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:39.459481    5712 command_runner.go:130] ! I0318 12:43:43.590461       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:39.459933    5712 command_runner.go:130] ! I0318 12:43:43.597830       1 shared_informer.go:318] Caches are synced for job
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:43:43.635734       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:43:43.658493       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:43:43.686534       1 shared_informer.go:318] Caches are synced for disruption
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:43:44.024395       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:43:44.024760       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:43:44.048280       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:11.303411       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:13.533509       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-48qkw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-48qkw"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:13.534203       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-fgn7v" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-fgn7v"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:13.534478       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:23.562573       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m02 status is now: NodeNotReady"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:23.591486       1 event.go:307] "Event occurred" object="kube-system/kindnet-d5llj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:23.614671       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vts9f" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:23.639496       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-hmhdf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:23.661949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.740356ms"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:23.663289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="50.499µs"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:37.149797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.1µs"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:37.209300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="28.125704ms"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:37.209415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.4µs"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:37.245284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.227968ms"
	I0318 12:44:39.459973    5712 command_runner.go:130] ! I0318 12:44:37.254358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="3.872028ms"
	I0318 12:44:39.474545    5712 logs.go:123] Gathering logs for kindnet [9fec05a61d2a] ...
	I0318 12:44:39.474545    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fec05a61d2a"
	I0318 12:44:39.508838    5712 command_runner.go:130] ! I0318 12:43:33.429181       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 12:44:39.508838    5712 command_runner.go:130] ! I0318 12:43:33.431032       1 main.go:107] hostIP = 172.25.148.129
	I0318 12:44:39.508838    5712 command_runner.go:130] ! podIP = 172.25.148.129
	I0318 12:44:39.508838    5712 command_runner.go:130] ! I0318 12:43:33.432708       1 main.go:116] setting mtu 1500 for CNI 
	I0318 12:44:39.509813    5712 command_runner.go:130] ! I0318 12:43:33.432750       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 12:44:39.509813    5712 command_runner.go:130] ! I0318 12:43:33.432773       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 12:44:39.509887    5712 command_runner.go:130] ! I0318 12:44:03.855331       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0318 12:44:39.509887    5712 command_runner.go:130] ! I0318 12:44:03.906638       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:39.509927    5712 command_runner.go:130] ! I0318 12:44:03.906763       1 main.go:227] handling current node
	I0318 12:44:39.509927    5712 command_runner.go:130] ! I0318 12:44:03.907280       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.509986    5712 command_runner.go:130] ! I0318 12:44:03.907371       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.509986    5712 command_runner.go:130] ! I0318 12:44:03.907763       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.25.159.102 Flags: [] Table: 0} 
	I0318 12:44:39.510024    5712 command_runner.go:130] ! I0318 12:44:03.907983       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.510024    5712 command_runner.go:130] ! I0318 12:44:03.907999       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.510024    5712 command_runner.go:130] ! I0318 12:44:03.908063       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.157.200 Flags: [] Table: 0} 
	I0318 12:44:39.510085    5712 command_runner.go:130] ! I0318 12:44:13.926166       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:13.926260       1 main.go:227] handling current node
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:13.926281       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:13.926377       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:13.927231       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:13.927364       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:23.943396       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:23.943437       1 main.go:227] handling current node
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:23.943450       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:23.943456       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:23.943816       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:23.943956       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:33.951114       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:33.951215       1 main.go:227] handling current node
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:33.951232       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:33.951241       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:33.951807       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:39.510111    5712 command_runner.go:130] ! I0318 12:44:33.951927       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:39.513198    5712 logs.go:123] Gathering logs for Docker ...
	I0318 12:44:39.513198    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 12:44:39.554235    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:39.554235    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:39.554314    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:39.554350    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:39.554350    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:39.554350    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:39.554420    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:39.554527    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:39.554527    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554527    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0318 12:44:39.554527    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554527    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554623    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0318 12:44:39.554729    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554729    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0318 12:44:39.554729    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:39.554729    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.554729    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 systemd[1]: Starting Docker Application Container Engine...
	I0318 12:44:39.554823    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.799415676Z" level=info msg="Starting up"
	I0318 12:44:39.554823    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.800442474Z" level=info msg="containerd not running, starting managed containerd"
	I0318 12:44:39.554823    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.801655972Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	I0318 12:44:39.554895    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.836542309Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 12:44:39.554924    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.866837154Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 12:44:39.554924    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.866991653Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 12:44:39.554924    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.867166153Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 12:44:39.554989    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.867346253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555015    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868353051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.555039    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868455451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555097    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868755450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.555097    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868785850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555148    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868803850Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 12:44:39.555186    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868815950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555186    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.869407649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555186    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.870171948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555265    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873462742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.555298    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873569242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.555298    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873718241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.555298    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873818241Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 12:44:39.555382    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874315040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 12:44:39.555382    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874434440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 12:44:39.555452    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874453940Z" level=info msg="metadata content store policy set" policy=shared
	I0318 12:44:39.555452    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880096930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 12:44:39.555502    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880252829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880377329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880397729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880414329Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880488329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880819128Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880926428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881236528Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881376427Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881400527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881426127Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881441527Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881474927Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881491327Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881506427Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881521027Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881536227Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881566927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881586627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881601327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881617327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881631227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881646527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881659427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881673727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881757827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.555533    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881783527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556075    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881798027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556075    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881812927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556075    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881826827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556075    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881844827Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 12:44:39.556075    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881868126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556075    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881889326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556186    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881902926Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 12:44:39.556224    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882002626Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 12:44:39.556224    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882117726Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 12:44:39.556224    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882162226Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 12:44:39.556300    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882178726Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 12:44:39.556336    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882242626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882337926Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882358926Z" level=info msg="NRI interface is disabled by configuration."
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882603625Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882759725Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.883033524Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.883153424Z" level=info msg="containerd successfully booted in 0.049971s"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:47 multinode-642600 dockerd[652]: time="2024-03-18T12:42:47.858472851Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.057442718Z" level=info msg="Loading containers: start."
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.544395210Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.632442528Z" level=info msg="Loading containers: done."
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.662805631Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.663682128Z" level=info msg="Daemon has completed initialization"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.725498031Z" level=info msg="API listen on [::]:2376"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 systemd[1]: Started Docker Application Container Engine.
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.725911430Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 systemd[1]: Stopping Docker Application Container Engine...
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.631434936Z" level=info msg="Processing signal 'terminated'"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.633587433Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634258932Z" level=info msg="Daemon shutdown complete"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634450831Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634476831Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: docker.service: Deactivated successfully.
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: Stopped Docker Application Container Engine.
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: Starting Docker Application Container Engine...
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.717087499Z" level=info msg="Starting up"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.718262797Z" level=info msg="containerd not running, starting managed containerd"
	I0318 12:44:39.556365    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.719705495Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1048
	I0318 12:44:39.556954    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.754738639Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 12:44:39.556954    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784193992Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 12:44:39.556954    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784236292Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 12:44:39.556954    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784275292Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 12:44:39.556954    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784291492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.556954    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784317492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.557080    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784331992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.557080    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784550091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.557176    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784651691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.557176    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784673391Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 12:44:39.557176    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784704091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.557176    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784764391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.557176    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784996290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.557267    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787641686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.557267    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787744286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:39.557267    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787950186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:39.557267    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788044886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 12:44:39.557370    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788091986Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 12:44:39.557370    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788127185Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 12:44:39.557370    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788138585Z" level=info msg="metadata content store policy set" policy=shared
	I0318 12:44:39.557370    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789136284Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 12:44:39.557472    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789269784Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 12:44:39.557472    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789298984Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 12:44:39.557472    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789320484Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 12:44:39.557472    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789342084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 12:44:39.557559    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789644383Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 12:44:39.557559    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.790600382Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 12:44:39.557559    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791760980Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 12:44:39.557559    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791832280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 12:44:39.557645    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791851580Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 12:44:39.557645    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791866579Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557645    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791880279Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557645    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791969479Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557728    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791989879Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557728    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792004479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557728    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792018079Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557728    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792030379Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557829    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792042479Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 12:44:39.557829    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792063279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557829    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792077879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557829    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792090579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557908    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792103979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557908    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792117779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557908    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792135679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557908    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792148379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557994    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792161279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557994    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792174179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557994    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792188479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.557994    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792199579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.558074    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792211479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.558074    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792223379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.558074    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792238079Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 12:44:39.558074    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792261579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.558074    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792276079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.558165    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792287879Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 12:44:39.558165    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792337479Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 12:44:39.558165    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792356479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 12:44:39.558252    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792368079Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 12:44:39.558252    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792380379Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 12:44:39.558252    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792530178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 12:44:39.558338    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792576778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 12:44:39.558338    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792591078Z" level=info msg="NRI interface is disabled by configuration."
	I0318 12:44:39.558338    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792811378Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 12:44:39.558457    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792927678Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.793108678Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.793160477Z" level=info msg="containerd successfully booted in 0.039931s"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:17 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:17.767243919Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:17 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:17.800090666Z" level=info msg="Loading containers: start."
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.103803081Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.187726546Z" level=info msg="Loading containers: done."
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.216487100Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.216648600Z" level=info msg="Daemon has completed initialization"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.271691012Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.271966711Z" level=info msg="API listen on [::]:2376"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 systemd[1]: Started Docker Application Container Engine.
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Loaded network plugin cni"
	I0318 12:44:39.558483    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0318 12:44:39.559108    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker Info: &{ID:aa9100d3-1595-41ce-b36f-06932aef3ecb Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:53 SystemTime:2024-03-18T12:43:19.415553382Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002da070 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-642600 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0318 12:44:39.559108    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0318 12:44:39.559108    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0318 12:44:39.559217    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0318 12:44:39.559246    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Start cri-dockerd grpc backend"
	I0318 12:44:39.559246    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0318 12:44:39.559246    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-fgn7v_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ed38da653fbefea9aeb0ebdb91f985394a7a792571704a4875018f5a6bc9abda\""
	I0318 12:44:39.559246    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-48qkw_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e\""
	I0318 12:44:39.559246    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.316277241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.559373    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.317878239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.559373    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.318571937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559373    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.319101537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559373    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356638277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.559373    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356750476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.559488    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356767376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559488    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.357118676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559488    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.418245378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.559565    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.421018274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.559565    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.421217073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559565    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.422102972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559647    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428274662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.559647    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428365762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.559647    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428455862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559647    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428580261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.559773    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67004ee038ee4247f6f751987304426067a63cee8c1636408dd16efea728ba78/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.559773    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f62197122538f83943df8b19710794ea6ea9a9ffa884082a1a62435e9b152c3f/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.559773    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eca6768355c74817c50b811b96b5fcc93a181c4968c53d4d4b0d0252ff6dbd0a/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.559859    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7281d6e698ea2dc42d7d3093ccde32b770bf8367fdb58230694380f40daeb9f/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.559859    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879224940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.559940    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879310840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.559940    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879325040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879857239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.050226267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.051715465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.056267457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.056729856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.064877643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065332743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065495042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065849742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091573301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091639201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091652401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091761800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:30Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.923135971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924017669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924165569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924385369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955673419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955753819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955772119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.956168818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560087    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964148405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560617    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964256705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560617    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964669604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560617    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964999404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560713    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a2f0ccaf5c4c6c0019124eda20c358dfa8aa20f0c92ade10aa3de32608e3527/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.560713    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/889c16eb0ab731956d02a28d0337dc6ff349dc574ba10d4fc1a939fb2e09d6d3/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.560790    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391303322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560790    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391389722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560790    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391408822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560855    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391535621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413113087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413460286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413726486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.414492285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ecbdcbdad3fa79af8ef70896ae67d65b14c47b5811078c5d6d167e0f295d1bc/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850170088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850431387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850449987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850590387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011137468Z" level=info msg="shim disconnected" id=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 namespace=moby
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011334567Z" level=warning msg="cleaning up after shim disconnected" id=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 namespace=moby
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011364567Z" level=info msg="cleaning up dead shim" namespace=moby
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1042]: time="2024-03-18T12:44:03.012148165Z" level=info msg="ignoring event" container=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562340104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562524303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.560908    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562584503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561449    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.563253802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561449    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.376262769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.561449    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.376780468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.561449    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.377021468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561449    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.377223268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561553    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:44:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1090dd57409807a15613607fd810b67863a9dd9c5a8512d7a6720906641c7f26/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:39.561553    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684170919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.561553    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684458920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.561618    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684558520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561644    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.685142822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561674    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901354745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.561714    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901518146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.561714    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901538746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561714    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901651446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561839    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:44:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e1b2432b0ed66a1175586c13232eb9b9239f18a4f9a86e2a0c5f48c1407fdb14/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0318 12:44:39.561839    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.227440411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:39.561839    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.227939926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:39.561839    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.228081131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561981    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.228507343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:39.561981    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.561981    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.562089    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.562089    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.562089    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.562162    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.562162    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:39.597174    5712 logs.go:123] Gathering logs for describe nodes ...
	I0318 12:44:39.597174    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 12:44:39.848049    5712 command_runner.go:130] > Name:               multinode-642600
	I0318 12:44:39.848049    5712 command_runner.go:130] > Roles:              control-plane
	I0318 12:44:39.848049    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_18_52_0700
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0318 12:44:39.848049    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:39.848049    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:39.848049    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:18:46 +0000
	I0318 12:44:39.848049    5712 command_runner.go:130] > Taints:             <none>
	I0318 12:44:39.848049    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:39.848049    5712 command_runner.go:130] > Lease:
	I0318 12:44:39.848049    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600
	I0318 12:44:39.848049    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:39.848049    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:44:31 +0000
	I0318 12:44:39.848049    5712 command_runner.go:130] > Conditions:
	I0318 12:44:39.848049    5712 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0318 12:44:39.849048    5712 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0318 12:44:39.849048    5712 command_runner.go:130] >   MemoryPressure   False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0318 12:44:39.849048    5712 command_runner.go:130] >   DiskPressure     False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0318 12:44:39.849048    5712 command_runner.go:130] >   PIDPressure      False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Ready            True    Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:44:11 +0000   KubeletReady                 kubelet is posting ready status
	I0318 12:44:39.849048    5712 command_runner.go:130] > Addresses:
	I0318 12:44:39.849048    5712 command_runner.go:130] >   InternalIP:  172.25.148.129
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Hostname:    multinode-642600
	I0318 12:44:39.849048    5712 command_runner.go:130] > Capacity:
	I0318 12:44:39.849048    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:39.849048    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:39.849048    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:39.849048    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:39.849048    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:39.849048    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:39.849048    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:39.849048    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:39.849048    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:39.849048    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:39.849048    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:39.849048    5712 command_runner.go:130] > System Info:
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Machine ID:                 021cb44913fc4689ab25739f723ae3da
	I0318 12:44:39.849048    5712 command_runner.go:130] >   System UUID:                8a1bcbab-f132-7f42-b33a-a7db97e0afe6
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Boot ID:                    f11360a5-920e-4374-9d22-d06f111079d8
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:39.849048    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:39.849048    5712 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0318 12:44:39.849048    5712 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0318 12:44:39.849048    5712 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:39.849048    5712 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0318 12:44:39.849048    5712 command_runner.go:130] >   default                     busybox-5b5d89c9d6-48qkw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-fgn7v                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 etcd-multinode-642600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 kindnet-kpt4f                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-642600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-642600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 kube-proxy-4dg79                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-642600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:39.849048    5712 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:39.849048    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:39.849048    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:39.849048    5712 command_runner.go:130] >   Resource           Requests     Limits
	I0318 12:44:39.849048    5712 command_runner.go:130] >   --------           --------     ------
	I0318 12:44:39.849048    5712 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0318 12:44:39.849048    5712 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0318 12:44:39.849851    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0318 12:44:39.849851    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0318 12:44:39.849851    5712 command_runner.go:130] > Events:
	I0318 12:44:39.849851    5712 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 12:44:39.849851    5712 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 12:44:39.849851    5712 command_runner.go:130] >   Normal  Starting                 25m                kube-proxy       
	I0318 12:44:39.849851    5712 command_runner.go:130] >   Normal  Starting                 65s                kube-proxy       
	I0318 12:44:39.849851    5712 command_runner.go:130] >   Normal  Starting                 25m                kubelet          Starting kubelet.
	I0318 12:44:39.849851    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:39.849997    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:39.849997    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     25m (x7 over 25m)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:39.850029    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:39.850029    5712 command_runner.go:130] >   Normal  Starting                 25m                kubelet          Starting kubelet.
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     25m                kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    25m                kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  25m                kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  RegisteredNode           25m                node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeReady                25m                kubelet          Node multinode-642600 status is now: NodeReady
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    75s (x8 over 75s)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     75s (x7 over 75s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	I0318 12:44:39.850060    5712 command_runner.go:130] > Name:               multinode-642600-m02
	I0318 12:44:39.850060    5712 command_runner.go:130] > Roles:              <none>
	I0318 12:44:39.850060    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600-m02
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_22_13_0700
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:39.850060    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:39.850060    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:22:12 +0000
	I0318 12:44:39.850060    5712 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 12:44:39.850060    5712 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 12:44:39.850060    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:39.850060    5712 command_runner.go:130] > Lease:
	I0318 12:44:39.850060    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600-m02
	I0318 12:44:39.850060    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:39.850060    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:40:15 +0000
	I0318 12:44:39.850060    5712 command_runner.go:130] > Conditions:
	I0318 12:44:39.850060    5712 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 12:44:39.850060    5712 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 12:44:39.850060    5712 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.850060    5712 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.850603    5712 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.850603    5712 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.850603    5712 command_runner.go:130] > Addresses:
	I0318 12:44:39.850603    5712 command_runner.go:130] >   InternalIP:  172.25.159.102
	I0318 12:44:39.850603    5712 command_runner.go:130] >   Hostname:    multinode-642600-m02
	I0318 12:44:39.850603    5712 command_runner.go:130] > Capacity:
	I0318 12:44:39.850603    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:39.850603    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:39.850603    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:39.850739    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:39.850739    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:39.850739    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:39.850775    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:39.850775    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:39.850775    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:39.850804    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:39.850804    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:39.850804    5712 command_runner.go:130] > System Info:
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Machine ID:                 3840c114554e41ff9ded1410244d8aba
	I0318 12:44:39.850804    5712 command_runner.go:130] >   System UUID:                23dbf5b1-f940-4749-8caf-1ae12d869a30
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Boot ID:                    9a3fcab5-beb6-4505-b112-82809850bba3
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:39.850804    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:39.850804    5712 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0318 12:44:39.850804    5712 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0318 12:44:39.850804    5712 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:39.850804    5712 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0318 12:44:39.850804    5712 command_runner.go:130] >   default                     busybox-5b5d89c9d6-hmhdf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0318 12:44:39.850804    5712 command_runner.go:130] >   kube-system                 kindnet-d5llj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0318 12:44:39.850804    5712 command_runner.go:130] >   kube-system                 kube-proxy-vts9f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0318 12:44:39.850804    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:39.850804    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Resource           Requests   Limits
	I0318 12:44:39.850804    5712 command_runner.go:130] >   --------           --------   ------
	I0318 12:44:39.850804    5712 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 12:44:39.850804    5712 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 12:44:39.850804    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 12:44:39.850804    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 12:44:39.850804    5712 command_runner.go:130] > Events:
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 12:44:39.850804    5712 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientMemory
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasNoDiskPressure
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientPID
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-642600-m02 status is now: NodeReady
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	I0318 12:44:39.850804    5712 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-642600-m02 status is now: NodeNotReady
	I0318 12:44:39.850804    5712 command_runner.go:130] > Name:               multinode-642600-m03
	I0318 12:44:39.850804    5712 command_runner.go:130] > Roles:              <none>
	I0318 12:44:39.850804    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600-m03
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_38_47_0700
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:39.850804    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:39.850804    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:39.851382    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:38:46 +0000
	I0318 12:44:39.851382    5712 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 12:44:39.851382    5712 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 12:44:39.851382    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:39.851382    5712 command_runner.go:130] > Lease:
	I0318 12:44:39.851382    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600-m03
	I0318 12:44:39.851382    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:39.851382    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:39:48 +0000
	I0318 12:44:39.851382    5712 command_runner.go:130] > Conditions:
	I0318 12:44:39.851382    5712 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 12:44:39.851382    5712 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 12:44:39.851382    5712 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.851382    5712 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.851382    5712 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.851557    5712 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:39.851557    5712 command_runner.go:130] > Addresses:
	I0318 12:44:39.851557    5712 command_runner.go:130] >   InternalIP:  172.25.157.200
	I0318 12:44:39.851557    5712 command_runner.go:130] >   Hostname:    multinode-642600-m03
	I0318 12:44:39.851557    5712 command_runner.go:130] > Capacity:
	I0318 12:44:39.851557    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:39.851557    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:39.851557    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:39.851557    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:39.851642    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:39.851642    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:39.851642    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:39.851642    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:39.851642    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:39.851642    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:39.851642    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:39.851642    5712 command_runner.go:130] > System Info:
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Machine ID:                 b858c7f1c1bc42a69e1927ccc26ea5ce
	I0318 12:44:39.851722    5712 command_runner.go:130] >   System UUID:                8c4fd36f-ab8b-5447-9df2-542afafc5ab4
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Boot ID:                    cea0ecfe-24ab-4614-a808-1e2a7a960f26
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:39.851722    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:39.851722    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:39.851839    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:39.851839    5712 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0318 12:44:39.851839    5712 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0318 12:44:39.851839    5712 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0318 12:44:39.851839    5712 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:39.851839    5712 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0318 12:44:39.851839    5712 command_runner.go:130] >   kube-system                 kindnet-thkjp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0318 12:44:39.851934    5712 command_runner.go:130] >   kube-system                 kube-proxy-khbjt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0318 12:44:39.851934    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:39.851934    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:39.851934    5712 command_runner.go:130] >   Resource           Requests   Limits
	I0318 12:44:39.851934    5712 command_runner.go:130] >   --------           --------   ------
	I0318 12:44:39.851934    5712 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 12:44:39.852011    5712 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 12:44:39.852011    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 12:44:39.852011    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 12:44:39.852011    5712 command_runner.go:130] > Events:
	I0318 12:44:39.852011    5712 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0318 12:44:39.852108    5712 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0318 12:44:39.852108    5712 command_runner.go:130] >   Normal  Starting                 17m                    kube-proxy       
	I0318 12:44:39.852108    5712 command_runner.go:130] >   Normal  Starting                 5m50s                  kube-proxy       
	I0318 12:44:39.852108    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x5 over 17m)      kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	I0318 12:44:39.852187    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x5 over 17m)      kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	I0318 12:44:39.852187    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x5 over 17m)      kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	I0318 12:44:39.852187    5712 command_runner.go:130] >   Normal  NodeReady                17m                    kubelet          Node multinode-642600-m03 status is now: NodeReady
	I0318 12:44:39.852187    5712 command_runner.go:130] >   Normal  Starting                 5m53s                  kubelet          Starting kubelet.
	I0318 12:44:39.852187    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m53s (x2 over 5m53s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	I0318 12:44:39.852187    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m53s (x2 over 5m53s)  kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	I0318 12:44:39.852274    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m53s (x2 over 5m53s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	I0318 12:44:39.852274    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m53s                  kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:39.852274    5712 command_runner.go:130] >   Normal  RegisteredNode           5m52s                  node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
	I0318 12:44:39.852274    5712 command_runner.go:130] >   Normal  NodeReady                5m47s                  kubelet          Node multinode-642600-m03 status is now: NodeReady
	I0318 12:44:39.852274    5712 command_runner.go:130] >   Normal  NodeNotReady             4m6s                   node-controller  Node multinode-642600-m03 status is now: NodeNotReady
	I0318 12:44:39.852360    5712 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
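
Both worker nodes in the describe output above end in NodeNotReady: their Ready/MemoryPressure/DiskPressure/PIDPressure conditions flip to Unknown with reason NodeStatusUnknown ("Kubelet stopped posting node status."), i.e. the node controller stopped receiving kubelet heartbeats after the restart. A minimal sketch for spot-checking this by hand, assuming the profile and node names taken from this run (multinode-642600, workers m02/m03); adapt them to your own cluster:

    kubectl --context multinode-642600 get nodes -o wide
    kubectl --context multinode-642600 describe node multinode-642600-m03
    # Check whether the kubelet on the affected guest is actually running:
    out/minikube-windows-amd64.exe -p multinode-642600 ssh -n m03 "sudo systemctl status kubelet"
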
	I0318 12:44:39.863571    5712 logs.go:123] Gathering logs for kube-apiserver [a48a6d310b86] ...
	I0318 12:44:39.863571    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a48a6d310b86"
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:26.873064       1 options.go:220] external host was not specified, using 172.25.148.129
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:26.879001       1 server.go:148] Version: v1.28.4
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:26.879883       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:27.623853       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:27.658081       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:27.658128       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:27.660963       1 instance.go:298] Using reconciler: lease
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:27.814829       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:27.815233       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:28.557814       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:28.558168       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.283146       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.346403       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.360856       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.360910       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.361419       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.361431       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.362356       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.365115       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.365134       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.365140       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.370774       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.370809       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.375063       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.375102       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.375108       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.375862       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.375929       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.375979       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.376693       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! I0318 12:43:29.384185       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0318 12:44:39.896631    5712 command_runner.go:130] ! W0318 12:43:29.384228       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897352    5712 command_runner.go:130] ! W0318 12:43:29.384236       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897352    5712 command_runner.go:130] ! I0318 12:43:29.385110       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0318 12:44:39.897352    5712 command_runner.go:130] ! W0318 12:43:29.385148       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897352    5712 command_runner.go:130] ! W0318 12:43:29.385155       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897352    5712 command_runner.go:130] ! I0318 12:43:29.388232       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0318 12:44:39.897352    5712 command_runner.go:130] ! W0318 12:43:29.388272       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0318 12:44:39.897352    5712 command_runner.go:130] ! I0318 12:43:29.392835       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0318 12:44:39.897352    5712 command_runner.go:130] ! W0318 12:43:29.392872       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897352    5712 command_runner.go:130] ! W0318 12:43:29.392880       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897567    5712 command_runner.go:130] ! I0318 12:43:29.393504       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0318 12:44:39.897567    5712 command_runner.go:130] ! W0318 12:43:29.393628       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897567    5712 command_runner.go:130] ! W0318 12:43:29.393636       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897567    5712 command_runner.go:130] ! I0318 12:43:29.401801       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0318 12:44:39.897567    5712 command_runner.go:130] ! W0318 12:43:29.401838       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897567    5712 command_runner.go:130] ! W0318 12:43:29.401846       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! I0318 12:43:29.405508       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0318 12:44:39.897688    5712 command_runner.go:130] ! I0318 12:43:29.409452       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.409492       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.409500       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! I0318 12:43:29.421682       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.421819       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.421829       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! I0318 12:43:29.426368       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.426405       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.426413       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! I0318 12:43:29.427337       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0318 12:44:39.897688    5712 command_runner.go:130] ! W0318 12:43:29.427376       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897688    5712 command_runner.go:130] ! I0318 12:43:29.459555       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0318 12:44:39.897846    5712 command_runner.go:130] ! W0318 12:43:29.459595       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:39.897872    5712 command_runner.go:130] ! I0318 12:43:30.367734       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:39.898207    5712 command_runner.go:130] ! I0318 12:43:30.367932       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:39.898207    5712 command_runner.go:130] ! I0318 12:43:30.368782       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0318 12:44:39.898271    5712 command_runner.go:130] ! I0318 12:43:30.370542       1 secure_serving.go:213] Serving securely on [::]:8443
	I0318 12:44:39.898271    5712 command_runner.go:130] ! I0318 12:43:30.370628       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:39.898316    5712 command_runner.go:130] ! I0318 12:43:30.371667       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.372321       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.372682       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.373559       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.373947       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.374159       1 available_controller.go:423] Starting AvailableConditionController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.374194       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.374404       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.374979       1 aggregator.go:164] waiting for initial CRD sync...
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.375087       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.375452       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.376837       1 controller.go:116] Starting legacy_token_tracking_controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.377105       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.377485       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.378013       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.378732       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.379224       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.379834       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.380470       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.380848       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.382047       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.382230       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.383964       1 controller.go:134] Starting OpenAPI controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.384158       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.384420       1 naming_controller.go:291] Starting NamingConditionController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.384790       1 establishing_controller.go:76] Starting EstablishingController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.385986       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.386163       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.386327       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.474963       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.476622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.496736       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.497067       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.497511       1 aggregator.go:166] initial CRD sync complete...
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.498503       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.498662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 12:44:39.898344    5712 command_runner.go:130] ! I0318 12:43:30.498825       1 cache.go:39] Caches are synced for autoregister controller
	I0318 12:44:39.898875    5712 command_runner.go:130] ! I0318 12:43:30.570075       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 12:44:39.898875    5712 command_runner.go:130] ! I0318 12:43:30.585880       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 12:44:39.898920    5712 command_runner.go:130] ! I0318 12:43:30.624565       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 12:44:39.898920    5712 command_runner.go:130] ! I0318 12:43:30.681515       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 12:44:39.899048    5712 command_runner.go:130] ! I0318 12:43:30.681604       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 12:44:39.899048    5712 command_runner.go:130] ! I0318 12:43:31.410513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 12:44:39.899121    5712 command_runner.go:130] ! W0318 12:43:31.917736       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129 172.25.151.112]
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:31.919293       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:31.929122       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:34.160688       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:34.367742       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:34.406080       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:34.542647       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 12:44:39.899121    5712 command_runner.go:130] ! I0318 12:43:34.562855       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 12:44:39.899121    5712 command_runner.go:130] ! W0318 12:43:51.920595       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129]
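
The kube-apiserver excerpt above shows a clean restart (admission plugins loaded, secure serving on [::]:8443, informer caches synced by 12:43:30), followed by the endpoints reconciler trimming the "kubernetes" master service from two addresses back to just 172.25.148.129 at 12:43:51, consistent with a stale apiserver lease for the old IP expiring. As a rough sketch, the same logs can be pulled manually the way the harness does above; the container ID a48a6d310b86 is specific to this run, so look up the current one first:

    # Find the running kube-apiserver container inside the minikube guest:
    out/minikube-windows-amd64.exe -p multinode-642600 ssh "docker ps --filter name=kube-apiserver --format '{{.ID}}'"
    # Then tail its logs, mirroring the harness's 'docker logs --tail 400' call:
    out/minikube-windows-amd64.exe -p multinode-642600 ssh "docker logs --tail 400 <container-id>"
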
	I0318 12:44:39.908056    5712 logs.go:123] Gathering logs for kube-controller-manager [a54be4436901] ...
	I0318 12:44:39.908056    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54be4436901"
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:43.818653       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:45.050029       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:45.050365       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:45.053707       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:45.056733       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:45.057073       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:39.946689    5712 command_runner.go:130] ! I0318 12:18:45.057232       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:39.947548    5712 command_runner.go:130] ! I0318 12:18:49.569825       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 12:44:39.947548    5712 command_runner.go:130] ! I0318 12:18:49.602388       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 12:44:39.947548    5712 command_runner.go:130] ! I0318 12:18:49.603663       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 12:44:39.947619    5712 command_runner.go:130] ! I0318 12:18:49.603680       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 12:44:39.947619    5712 command_runner.go:130] ! I0318 12:18:49.621364       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 12:44:39.947659    5712 command_runner.go:130] ! I0318 12:18:49.621624       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:39.947659    5712 command_runner.go:130] ! I0318 12:18:49.621432       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 12:44:39.947659    5712 command_runner.go:130] ! I0318 12:18:49.622281       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 12:44:39.947659    5712 command_runner.go:130] ! I0318 12:18:49.644362       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 12:44:39.947727    5712 command_runner.go:130] ! I0318 12:18:49.644758       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 12:44:39.947727    5712 command_runner.go:130] ! I0318 12:18:49.646607       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 12:44:39.947727    5712 command_runner.go:130] ! I0318 12:18:49.660400       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 12:44:39.947825    5712 command_runner.go:130] ! I0318 12:18:49.661053       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 12:44:39.947825    5712 command_runner.go:130] ! I0318 12:18:49.670023       1 shared_informer.go:318] Caches are synced for tokens
	I0318 12:44:39.947825    5712 command_runner.go:130] ! I0318 12:18:49.679784       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 12:44:39.947825    5712 command_runner.go:130] ! I0318 12:18:49.680015       1 expand_controller.go:328] "Starting expand controller"
	I0318 12:44:39.947825    5712 command_runner.go:130] ! I0318 12:18:49.680028       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 12:44:39.947910    5712 command_runner.go:130] ! I0318 12:18:49.692925       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 12:44:39.947910    5712 command_runner.go:130] ! I0318 12:18:49.693164       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 12:44:39.947910    5712 command_runner.go:130] ! I0318 12:18:49.693449       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 12:44:39.947910    5712 command_runner.go:130] ! I0318 12:18:49.727464       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 12:44:39.947910    5712 command_runner.go:130] ! I0318 12:18:49.727835       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 12:44:39.947982    5712 command_runner.go:130] ! I0318 12:18:49.727848       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 12:44:39.947982    5712 command_runner.go:130] ! I0318 12:18:49.742409       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 12:44:39.947982    5712 command_runner.go:130] ! I0318 12:18:49.743029       1 disruption.go:433] "Sending events to api server."
	I0318 12:44:39.947982    5712 command_runner.go:130] ! I0318 12:18:49.743301       1 disruption.go:444] "Starting disruption controller"
	I0318 12:44:39.947982    5712 command_runner.go:130] ! I0318 12:18:49.743449       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 12:44:39.947982    5712 command_runner.go:130] ! I0318 12:18:49.759716       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 12:44:39.948058    5712 command_runner.go:130] ! I0318 12:18:49.760338       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 12:44:39.948058    5712 command_runner.go:130] ! I0318 12:18:49.760376       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 12:44:39.948202    5712 command_runner.go:130] ! I0318 12:18:49.829809       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 12:44:39.948202    5712 command_runner.go:130] ! I0318 12:18:49.830343       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 12:44:39.948284    5712 command_runner.go:130] ! I0318 12:18:49.830415       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 12:44:39.948373    5712 command_runner.go:130] ! I0318 12:18:50.085725       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 12:44:39.948373    5712 command_runner.go:130] ! I0318 12:18:50.086016       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 12:44:39.948462    5712 command_runner.go:130] ! I0318 12:18:50.086167       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 12:44:39.948462    5712 command_runner.go:130] ! I0318 12:18:50.234974       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 12:44:39.948537    5712 command_runner.go:130] ! I0318 12:18:50.242121       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 12:44:39.948567    5712 command_runner.go:130] ! I0318 12:18:50.242138       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 12:44:39.948567    5712 command_runner.go:130] ! I0318 12:18:50.384031       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 12:44:39.948645    5712 command_runner.go:130] ! I0318 12:18:50.384090       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 12:44:39.948645    5712 command_runner.go:130] ! I0318 12:18:50.384100       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 12:44:39.948708    5712 command_runner.go:130] ! I0318 12:18:50.384108       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 12:44:39.948708    5712 command_runner.go:130] ! I0318 12:18:50.530182       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 12:44:39.948708    5712 command_runner.go:130] ! I0318 12:18:50.530258       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 12:44:39.948786    5712 command_runner.go:130] ! I0318 12:18:50.530267       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 12:44:39.948814    5712 command_runner.go:130] ! I0318 12:18:50.695232       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 12:44:39.948814    5712 command_runner.go:130] ! I0318 12:18:50.695351       1 job_controller.go:226] "Starting job controller"
	I0318 12:44:39.948814    5712 command_runner.go:130] ! I0318 12:18:50.695361       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 12:44:39.948814    5712 command_runner.go:130] ! I0318 12:18:50.833418       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 12:44:39.948939    5712 command_runner.go:130] ! I0318 12:18:50.833674       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 12:44:39.948939    5712 command_runner.go:130] ! I0318 12:18:50.833686       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 12:44:39.949052    5712 command_runner.go:130] ! I0318 12:18:50.998838       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 12:44:39.949083    5712 command_runner.go:130] ! I0318 12:18:50.999193       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 12:44:39.949083    5712 command_runner.go:130] ! I0318 12:18:50.999227       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 12:44:39.949149    5712 command_runner.go:130] ! I0318 12:18:51.141445       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 12:44:39.949195    5712 command_runner.go:130] ! I0318 12:18:51.141508       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 12:44:39.949246    5712 command_runner.go:130] ! I0318 12:18:51.141518       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 12:44:39.949343    5712 command_runner.go:130] ! I0318 12:18:51.279642       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 12:44:39.949477    5712 command_runner.go:130] ! I0318 12:18:51.279728       1 gc_controller.go:101] "Starting GC controller"
	I0318 12:44:39.949504    5712 command_runner.go:130] ! I0318 12:18:51.279742       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 12:44:39.949504    5712 command_runner.go:130] ! I0318 12:18:51.429394       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 12:44:39.949583    5712 command_runner.go:130] ! I0318 12:18:51.429600       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 12:44:39.949633    5712 command_runner.go:130] ! I0318 12:18:51.429612       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 12:44:39.949687    5712 command_runner.go:130] ! I0318 12:19:01.598915       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 12:44:39.949716    5712 command_runner.go:130] ! I0318 12:19:01.598966       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 12:44:39.949787    5712 command_runner.go:130] ! I0318 12:19:01.599163       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 12:44:39.949787    5712 command_runner.go:130] ! I0318 12:19:01.599174       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 12:44:39.949787    5712 command_runner.go:130] ! I0318 12:19:01.601488       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.601803       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.601987       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.602013       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.602019       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.623744       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.624435       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.624966       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.663430       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.663839       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.663858       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710104       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710384       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710455       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710487       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710760       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710795       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 12:44:39.949931    5712 command_runner.go:130] ! I0318 12:19:01.710822       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 12:44:39.950470    5712 command_runner.go:130] ! I0318 12:19:01.710886       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 12:44:39.950470    5712 command_runner.go:130] ! I0318 12:19:01.710930       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 12:44:39.950517    5712 command_runner.go:130] ! I0318 12:19:01.710986       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 12:44:39.950517    5712 command_runner.go:130] ! I0318 12:19:01.711095       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 12:44:39.950517    5712 command_runner.go:130] ! I0318 12:19:01.711137       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 12:44:39.950563    5712 command_runner.go:130] ! I0318 12:19:01.711160       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 12:44:39.950563    5712 command_runner.go:130] ! I0318 12:19:01.711179       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 12:44:39.950563    5712 command_runner.go:130] ! I0318 12:19:01.711211       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 12:44:39.950563    5712 command_runner.go:130] ! I0318 12:19:01.711237       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 12:44:39.950563    5712 command_runner.go:130] ! I0318 12:19:01.711261       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 12:44:39.950563    5712 command_runner.go:130] ! I0318 12:19:01.711286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 12:44:39.950638    5712 command_runner.go:130] ! I0318 12:19:01.711316       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 12:44:39.950638    5712 command_runner.go:130] ! I0318 12:19:01.711339       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 12:44:39.950638    5712 command_runner.go:130] ! I0318 12:19:01.711356       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 12:44:39.950638    5712 command_runner.go:130] ! I0318 12:19:01.711486       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 12:44:39.950720    5712 command_runner.go:130] ! I0318 12:19:01.711654       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:39.950720    5712 command_runner.go:130] ! I0318 12:19:01.711784       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 12:44:39.950757    5712 command_runner.go:130] ! I0318 12:19:01.715155       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.715586       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.715886       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.732340       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.732695       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.732944       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.747011       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.747361       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.747484       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 12:44:39.950787    5712 command_runner.go:130] ! E0318 12:19:01.771424       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.771527       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.771544       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.772072       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! E0318 12:19:01.775461       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.775656       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.788795       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.789335       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.789368       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.809091       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.809368       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.809720       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.846190       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.846779       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:01.846879       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.137994       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.138059       1 horizontal.go:200] "Starting HPA controller"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.138069       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.189502       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.189864       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.190041       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.191172       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.191256       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 12:44:39.950787    5712 command_runner.go:130] ! I0318 12:19:02.191347       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.951353    5712 command_runner.go:130] ! I0318 12:19:02.193057       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 12:44:39.951353    5712 command_runner.go:130] ! I0318 12:19:02.193152       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.193246       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.194807       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.194851       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.195648       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.194886       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.345061       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.347311       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.364524       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.380069       1 shared_informer.go:318] Caches are synced for expand
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.390503       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.391317       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.393201       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.402532       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.419971       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.421082       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600\" does not exist"
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.427201       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.427876       1 shared_informer.go:318] Caches are synced for service account
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.429003       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.429629       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.430311       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.432115       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.434603       1 shared_informer.go:318] Caches are synced for TTL
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.437362       1 shared_informer.go:318] Caches are synced for deployment
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.438306       1 shared_informer.go:318] Caches are synced for HPA
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.441785       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.442916       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.444302       1 shared_informer.go:318] Caches are synced for disruption
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.447137       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.447694       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.452098       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.454023       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.461158       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.464623       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.480847       1 shared_informer.go:318] Caches are synced for GC
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.487772       1 shared_informer.go:318] Caches are synced for namespace
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.490082       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.494160       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 12:44:39.951413    5712 command_runner.go:130] ! I0318 12:19:02.499312       1 shared_informer.go:318] Caches are synced for node
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.499587       1 range_allocator.go:174] "Sending events to api server"
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.499772       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.500365       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.500954       1 shared_informer.go:318] Caches are synced for job
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.501438       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.501724       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 12:44:39.951945    5712 command_runner.go:130] ! I0318 12:19:02.503931       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 12:44:39.952089    5712 command_runner.go:130] ! I0318 12:19:02.509883       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 12:44:39.952089    5712 command_runner.go:130] ! I0318 12:19:02.528934       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600" podCIDRs=["10.244.0.0/24"]
	I0318 12:44:39.952089    5712 command_runner.go:130] ! I0318 12:19:02.565942       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:39.952089    5712 command_runner.go:130] ! I0318 12:19:02.603468       1 shared_informer.go:318] Caches are synced for taint
	I0318 12:44:39.952089    5712 command_runner.go:130] ! I0318 12:19:02.603627       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 12:44:39.952089    5712 command_runner.go:130] ! I0318 12:19:02.603721       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600"
	I0318 12:44:39.952178    5712 command_runner.go:130] ! I0318 12:19:02.603760       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0318 12:44:39.952178    5712 command_runner.go:130] ! I0318 12:19:02.603782       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 12:44:39.952178    5712 command_runner.go:130] ! I0318 12:19:02.603821       1 taint_manager.go:210] "Sending events to api server"
	I0318 12:44:39.952178    5712 command_runner.go:130] ! I0318 12:19:02.605481       1 event.go:307] "Event occurred" object="multinode-642600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600 event: Registered Node multinode-642600 in Controller"
	I0318 12:44:39.952178    5712 command_runner.go:130] ! I0318 12:19:02.613688       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:39.952245    5712 command_runner.go:130] ! I0318 12:19:02.644197       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.952245    5712 command_runner.go:130] ! I0318 12:19:02.675188       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.952245    5712 command_runner.go:130] ! I0318 12:19:02.675510       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.952360    5712 command_runner.go:130] ! I0318 12:19:02.681286       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.952360    5712 command_runner.go:130] ! I0318 12:19:03.023915       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.023946       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.029139       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.075135       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.175071       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kpt4f"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.181384       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4dg79"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.624405       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fgn7v"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.691902       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xkgdt"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.810454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="734.97569ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.847906       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="37.087083ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.945758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.729709ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:03.945958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.501µs"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:04.640409       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:04.732241       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-xkgdt"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:04.763359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.567183ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:04.828298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.870031ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:04.890459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.083804ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:04.890764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.4µs"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:15.938090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="157.9µs"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:15.982953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.301µs"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:17.607464       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:19.208242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.7µs"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:19.274086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.124146ms"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:19:19.275145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="211.9µs"
	I0318 12:44:39.952387    5712 command_runner.go:130] ! I0318 12:22:12.652722       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m02\" does not exist"
	I0318 12:44:39.952920    5712 command_runner.go:130] ! I0318 12:22:12.679760       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m02" podCIDRs=["10.244.1.0/24"]
	I0318 12:44:39.952920    5712 command_runner.go:130] ! I0318 12:22:12.706735       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d5llj"
	I0318 12:44:39.952920    5712 command_runner.go:130] ! I0318 12:22:12.706774       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vts9f"
	I0318 12:44:39.952920    5712 command_runner.go:130] ! I0318 12:22:17.642129       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m02"
	I0318 12:44:39.953018    5712 command_runner.go:130] ! I0318 12:22:17.642212       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller"
	I0318 12:44:39.953045    5712 command_runner.go:130] ! I0318 12:22:34.263318       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953045    5712 command_runner.go:130] ! I0318 12:23:01.851486       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 12:44:39.953045    5712 command_runner.go:130] ! I0318 12:23:01.881281       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-hmhdf"
	I0318 12:44:39.953144    5712 command_runner.go:130] ! I0318 12:23:01.924301       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-48qkw"
	I0318 12:44:39.953144    5712 command_runner.go:130] ! I0318 12:23:01.946058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.676064ms"
	I0318 12:44:39.953144    5712 command_runner.go:130] ! I0318 12:23:02.049702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="103.251772ms"
	I0318 12:44:39.953144    5712 command_runner.go:130] ! I0318 12:23:02.049789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="35.4µs"
	I0318 12:44:39.953224    5712 command_runner.go:130] ! I0318 12:23:04.783277       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.030749ms"
	I0318 12:44:39.953224    5712 command_runner.go:130] ! I0318 12:23:04.783520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.9µs"
	I0318 12:44:39.953224    5712 command_runner.go:130] ! I0318 12:23:05.441638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.350047ms"
	I0318 12:44:39.953305    5712 command_runner.go:130] ! I0318 12:23:05.441876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="105µs"
	I0318 12:44:39.953393    5712 command_runner.go:130] ! I0318 12:27:09.073772       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:39.953393    5712 command_runner.go:130] ! I0318 12:27:09.075345       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953393    5712 command_runner.go:130] ! I0318 12:27:09.095707       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.2.0/24"]
	I0318 12:44:39.953393    5712 command_runner.go:130] ! I0318 12:27:09.110695       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-khbjt"
	I0318 12:44:39.953393    5712 command_runner.go:130] ! I0318 12:27:09.110730       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-thkjp"
	I0318 12:44:39.953521    5712 command_runner.go:130] ! I0318 12:27:12.715112       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m03"
	I0318 12:44:39.953521    5712 command_runner.go:130] ! I0318 12:27:12.715611       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:39.953586    5712 command_runner.go:130] ! I0318 12:27:30.856729       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953614    5712 command_runner.go:130] ! I0318 12:35:52.853028       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953614    5712 command_runner.go:130] ! I0318 12:35:52.854041       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:44:39.953614    5712 command_runner.go:130] ! I0318 12:35:52.871920       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.953675    5712 command_runner.go:130] ! I0318 12:35:52.891158       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.953675    5712 command_runner.go:130] ! I0318 12:38:40.101072       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953675    5712 command_runner.go:130] ! I0318 12:38:42.930337       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-642600-m03 event: Removing Node multinode-642600-m03 from Controller"
	I0318 12:44:39.953779    5712 command_runner.go:130] ! I0318 12:38:46.825246       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953838    5712 command_runner.go:130] ! I0318 12:38:46.827225       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:39.953838    5712 command_runner.go:130] ! I0318 12:38:46.865011       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.3.0/24"]
	I0318 12:44:39.953838    5712 command_runner.go:130] ! I0318 12:38:47.931681       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:39.953838    5712 command_runner.go:130] ! I0318 12:38:52.975724       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.953969    5712 command_runner.go:130] ! I0318 12:40:33.280094       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:39.954063    5712 command_runner.go:130] ! I0318 12:40:33.281180       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:44:39.954129    5712 command_runner.go:130] ! I0318 12:40:33.601041       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.954248    5712 command_runner.go:130] ! I0318 12:40:33.698293       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:39.974432    5712 logs.go:123] Gathering logs for container status ...
	I0318 12:44:39.974432    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 12:44:40.071437    5712 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0318 12:44:40.071541    5712 command_runner.go:130] > 566e40ce923f7       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   e1b2432b0ed66       busybox-5b5d89c9d6-48qkw
	I0318 12:44:40.071541    5712 command_runner.go:130] > fcf17db92b351       ead0a4a53df89                                                                                         5 seconds ago        Running             coredns                   1                   1090dd5740980       coredns-5dd5756b68-fgn7v
	I0318 12:44:40.071541    5712 command_runner.go:130] > 4652c26c0904e       6e38f40d628db                                                                                         23 seconds ago       Running             storage-provisioner       2                   889c16eb0ab73       storage-provisioner
	I0318 12:44:40.071716    5712 command_runner.go:130] > 9fec05a61d2a9       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   5ecbdcbdad3fa       kindnet-kpt4f
	I0318 12:44:40.071760    5712 command_runner.go:130] > 787ade2ea2cd0       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   889c16eb0ab73       storage-provisioner
	I0318 12:44:40.071815    5712 command_runner.go:130] > 575b41a3a85a4       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   7a2f0ccaf5c4c       kube-proxy-4dg79
	I0318 12:44:40.071839    5712 command_runner.go:130] > a48a6d310b868       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   a7281d6e698ea       kube-apiserver-multinode-642600
	I0318 12:44:40.071929    5712 command_runner.go:130] > 14ae9398d33b1       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   eca6768355c74       kube-controller-manager-multinode-642600
	I0318 12:44:40.071954    5712 command_runner.go:130] > bd1e4f4d262e3       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   f62197122538f       kube-scheduler-multinode-642600
	I0318 12:44:40.071954    5712 command_runner.go:130] > 8e7911b58c587       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   67004ee038ee4       etcd-multinode-642600
	I0318 12:44:40.071954    5712 command_runner.go:130] > a8dd2eacb7251       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago       Exited              busybox                   0                   29bb4d534c2e2       busybox-5b5d89c9d6-48qkw
	I0318 12:44:40.071954    5712 command_runner.go:130] > e81f1d2fdb360       ead0a4a53df89                                                                                         25 minutes ago       Exited              coredns                   0                   ed38da653fbef       coredns-5dd5756b68-fgn7v
	I0318 12:44:40.071954    5712 command_runner.go:130] > 5cf42651cb21d       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago       Exited              kindnet-cni               0                   fef37141be6db       kindnet-kpt4f
	I0318 12:44:40.071954    5712 command_runner.go:130] > 4bbad08fe59ac       83f6cc407eed8                                                                                         25 minutes ago       Exited              kube-proxy                0                   2f4709a3a45a4       kube-proxy-4dg79
	I0318 12:44:40.071954    5712 command_runner.go:130] > a54be44369019       d058aa5ab969c                                                                                         25 minutes ago       Exited              kube-controller-manager   0                   d766c4514f0bf       kube-controller-manager-multinode-642600
	I0318 12:44:40.071954    5712 command_runner.go:130] > 47777d4c0b90d       e3db313c6dbc0                                                                                         25 minutes ago       Exited              kube-scheduler            0                   3500a9f1ca84e       kube-scheduler-multinode-642600
	I0318 12:44:40.074384    5712 logs.go:123] Gathering logs for kubelet ...
	I0318 12:44:40.074437    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 12:44:40.106622    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:40.106622    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.841405    1388 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:40.106622    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.841736    1388 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:40.106622    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.842325    1388 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:40.106728    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: E0318 12:43:20.842583    1388 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 12:44:40.106728    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:40.106728    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.629315    1445 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.629808    1445 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.631096    1445 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: E0318 12:43:21.631229    1445 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:23 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.100950    1523 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.101311    1523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.101646    1523 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.108175    1523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.123413    1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.204504    1523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205069    1523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205344    1523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205667    1523 topology_manager.go:138] "Creating topology manager with none policy"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205685    1523 container_manager_linux.go:301] "Creating device plugin manager"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.206240    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.208674    1523 kubelet.go:393] "Attempting to sync node with API server"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.208817    1523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.209351    1523 kubelet.go:309] "Adding apiserver pod source"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.209491    1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0318 12:44:40.106787    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.212857    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107366    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.213311    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107366    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.219866    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107366    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.220057    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107366    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.240215    1523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0318 12:44:40.107366    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.245761    1523 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0318 12:44:40.107551    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.248742    1523 server.go:1232] "Started kubelet"
	I0318 12:44:40.107551    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.249814    1523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0318 12:44:40.107551    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.251561    1523 server.go:462] "Adding debug handlers to kubelet server"
	I0318 12:44:40.107551    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.254285    1523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0318 12:44:40.107551    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.255480    1523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0318 12:44:40.107748    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.255659    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-642600.17bddc6f5820f7a9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-642600", UID:"multinode-642600", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-642600"}, FirstTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-642600"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.25.148.129:8443: connect: connection refused'(may retry after sleeping)
	I0318 12:44:40.107748    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.259469    1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0318 12:44:40.107748    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.261490    1523 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0318 12:44:40.107748    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.265275    1523 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.270368    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="200ms"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.275611    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.275814    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.317069    1523 reconciler_new.go:29] "Reconciler: start to sync state"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327943    1523 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327963    1523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327985    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329007    1523 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329047    1523 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329057    1523 policy_none.go:49] "None policy: Start"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.336597    1523 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.336631    1523 state_mem.go:35] "Initializing new in-memory state store"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.337548    1523 state_mem.go:75] "Updated machine memory state"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.345495    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.348154    1523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.351399    1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.355603    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.356232    1523 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.357037    1523 kubelet.go:2303] "Starting kubelet main sync loop"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.359069    1523 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.367050    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.367230    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.387242    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 12:44:40.107819    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 12:44:40.108352    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 12:44:40.108352    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 12:44:40.108352    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.387428    1523 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-642600\" not found"
	I0318 12:44:40.108352    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.399151    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:40.108352    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.399841    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:40.108352    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.460339    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d5f09afee1a6ef36657c1ae3335ddda6" podNamespace="kube-system" podName="etcd-multinode-642600"
	I0318 12:44:40.108483    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.472389    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="400ms"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.474475    1523 topology_manager.go:215] "Topology Admit Handler" podUID="624de65f019baf96d4a0e2fb6064e413" podNamespace="kube-system" podName="kube-apiserver-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.487469    1523 topology_manager.go:215] "Topology Admit Handler" podUID="a1608bc774d0b3e96e1b6fbbded5cb52" podNamespace="kube-system" podName="kube-controller-manager-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.500311    1523 topology_manager.go:215] "Topology Admit Handler" podUID="cf50844b540be8ed0b3e767db413ac8f" podNamespace="kube-system" podName="kube-scheduler-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.527553    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d5f09afee1a6ef36657c1ae3335ddda6-etcd-certs\") pod \"etcd-multinode-642600\" (UID: \"d5f09afee1a6ef36657c1ae3335ddda6\") " pod="kube-system/etcd-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.527604    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d5f09afee1a6ef36657c1ae3335ddda6-etcd-data\") pod \"etcd-multinode-642600\" (UID: \"d5f09afee1a6ef36657c1ae3335ddda6\") " pod="kube-system/etcd-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534726    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed38da653fbefea9aeb0ebdb91f985394a7a792571704a4875018f5a6bc9abda"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534857    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d766c4514f0bf79902b72d04d9e3a09fc2bcf5ef330f41cd3e84e63f5151f2b6"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534873    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f100b1062a56929e04e6e4377055b065d93a28c504f060cce4695165a2c33db0"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534885    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a9b4c05a5ccd5364b8dac2797803c98520c4f98df0fba77af7521af64a15152"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534943    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f4709a3a45a45f0c67f457df8bb202ea2867cfedeaec4a164509190df13f510"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534961    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3500a9f1ca84ed3d58cdd473a0c7c47a59643858c05dfd90247a09b1b43302bd"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.552869    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad98ae0cd7c7708c7e02f0b23fc33f1ca2b404bd7fec324c21beefcbe17d009"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.571969    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.589127    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef37141be6db2ba71fd0f1d2feee00d6ab5d31d607323e4f5ffab4a3e70cfa5"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.614112    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.616006    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629143    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-flexvolume-dir\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629404    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629689    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:40.108544    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629754    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-ca-certs\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:40.109121    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629780    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-k8s-certs\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:40.109121    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629802    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-kubeconfig\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:40.109350    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629825    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf50844b540be8ed0b3e767db413ac8f-kubeconfig\") pod \"kube-scheduler-multinode-642600\" (UID: \"cf50844b540be8ed0b3e767db413ac8f\") " pod="kube-system/kube-scheduler-multinode-642600"
	I0318 12:44:40.109427    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629860    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-ca-certs\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629919    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-k8s-certs\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.875125    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="800ms"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.030740    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.031776    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.266849    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.266980    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.674768    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7281d6e698ea2dc42d7d3093ccde32b770bf8367fdb58230694380f40daeb9f"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.676706    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="1.6s"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.692553    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eca6768355c74817c50b811b96b5fcc93a181c4968c53d4d4b0d0252ff6dbd0a"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.700976    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.701062    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.708111    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f62197122538f83943df8b19710794ea6ea9a9ffa884082a1a62435e9b152c3f"
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.731607    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.731695    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.109501    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.790774    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.110034    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.790867    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:40.110034    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.868581    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:40.110034    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.869663    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 kubelet[1523]: E0318 12:43:26.129309    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-642600.17bddc6f5820f7a9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-642600", UID:"multinode-642600", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-642600"}, FirstTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-642600"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.25.148.129:8443: connect: connection refused'(may retry after sleeping)
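
Every failed dial above targets https://control-plane.minikube.internal:8443 and is refused while the apiserver is still coming back up. A minimal reachability probe from inside the guest, assuming the multinode-642600 profile from this run and that curl is present in the image, would be:

	out/minikube-windows-amd64.exe -p multinode-642600 ssh -- curl -sk -o /dev/null -w '%{http_code}' https://control-plane.minikube.internal:8443/healthz

A 200 from that endpoint marks the point (around 12:43:30 below) where node registration starts succeeding.
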
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:27 multinode-642600 kubelet[1523]: I0318 12:43:27.488157    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.626198    1523 kubelet_node_status.go:108] "Node was previously registered" node="multinode-642600"
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.626989    1523 kubelet_node_status.go:73] "Successfully registered node" node="multinode-642600"
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.640050    1523 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.642279    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0318 12:44:40.110265    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.658382    1523 setters.go:552] "Node became not ready" node="multinode-642600" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-18T12:43:30Z","lastTransitionTime":"2024-03-18T12:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
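
The "Node became not ready" condition above is the expected KubeletNotReady window while kindnet has not yet written its CNI config. One way to watch the condition flip from the host, assuming the profile name doubles as the kubeconfig context as elsewhere in this report, is:

	kubectl --context multinode-642600 get nodes -w
	kubectl --context multinode-642600 describe node multinode-642600
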
	I0318 12:44:40.110411    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.223393    1523 apiserver.go:52] "Watching apiserver"
	I0318 12:44:40.110411    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.230566    1523 topology_manager.go:215] "Topology Admit Handler" podUID="acd9d7a0-0e27-4bbb-8562-6fbf374742ca" podNamespace="kube-system" podName="kindnet-kpt4f"
	I0318 12:44:40.110411    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231421    1523 topology_manager.go:215] "Topology Admit Handler" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b" podNamespace="kube-system" podName="coredns-5dd5756b68-fgn7v"
	I0318 12:44:40.110534    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231644    1523 topology_manager.go:215] "Topology Admit Handler" podUID="449242c2-ad12-4da5-b339-3be7ab8a9b16" podNamespace="kube-system" podName="kube-proxy-4dg79"
	I0318 12:44:40.110534    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231779    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d2718b8a-26a9-4c86-bf9a-221d1ee23ceb" podNamespace="kube-system" podName="storage-provisioner"
	I0318 12:44:40.110534    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231939    1523 topology_manager.go:215] "Topology Admit Handler" podUID="45969c0e-ac43-459e-95c0-86f7b76947db" podNamespace="default" podName="busybox-5b5d89c9d6-48qkw"
	I0318 12:44:40.110650    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.232191    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.110740    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.233435    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.110768    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.235227    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-642600" podUID="4aa98cb9-f6ab-40b3-8c15-235ba4e09985"
	I0318 12:44:40.110768    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.236365    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-642600" podUID="237133d7-6f1a-42ee-8cf2-a2d7564d67fc"
	I0318 12:44:40.110768    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.266715    1523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0318 12:44:40.110865    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.289094    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:40.110893    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.301996    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-642600"
	I0318 12:44:40.110893    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.322408    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/449242c2-ad12-4da5-b339-3be7ab8a9b16-lib-modules\") pod \"kube-proxy-4dg79\" (UID: \"449242c2-ad12-4da5-b339-3be7ab8a9b16\") " pod="kube-system/kube-proxy-4dg79"
	I0318 12:44:40.111002    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.322793    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-xtables-lock\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:40.111002    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323081    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d2718b8a-26a9-4c86-bf9a-221d1ee23ceb-tmp\") pod \"storage-provisioner\" (UID: \"d2718b8a-26a9-4c86-bf9a-221d1ee23ceb\") " pod="kube-system/storage-provisioner"
	I0318 12:44:40.111064    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323213    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-cni-cfg\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323245    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/449242c2-ad12-4da5-b339-3be7ab8a9b16-xtables-lock\") pod \"kube-proxy-4dg79\" (UID: \"449242c2-ad12-4da5-b339-3be7ab8a9b16\") " pod="kube-system/kube-proxy-4dg79"
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323294    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-lib-modules\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.324469    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.324580    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:31.824540428 +0000 UTC m=+7.835780164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339515    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339554    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339661    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:31.839645304 +0000 UTC m=+7.850885040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.384452    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-642600" podStartSLOduration=0.384368133 podCreationTimestamp="2024-03-18 12:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:43:31.360389871 +0000 UTC m=+7.371629607" watchObservedRunningTime="2024-03-18 12:43:31.384368133 +0000 UTC m=+7.395607769"
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.431280    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-642600" podStartSLOduration=0.431225058 podCreationTimestamp="2024-03-18 12:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:43:31.388015127 +0000 UTC m=+7.399254863" watchObservedRunningTime="2024-03-18 12:43:31.431225058 +0000 UTC m=+7.442464794"
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.828430    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.828605    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:32.828568222 +0000 UTC m=+8.839807858 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930285    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930420    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930532    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:32.930496159 +0000 UTC m=+8.941735795 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.133795    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889c16eb0ab731956d02a28d0337dc6ff349dc574ba10d4fc1a939fb2e09d6d3"
	I0318 12:44:40.111118    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.147805    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a2f0ccaf5c4c6c0019124eda20c358dfa8aa20f0c92ade10aa3de32608e3527"
	I0318 12:44:40.111649    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.369742    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d04d3e415061983b742e6c14f1a5f562" path="/var/lib/kubelet/pods/d04d3e415061983b742e6c14f1a5f562/volumes"
	I0318 12:44:40.111649    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.371223    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ec96a596e22f5afedbd92a854d1b8bec" path="/var/lib/kubelet/pods/ec96a596e22f5afedbd92a854d1b8bec/volumes"
	I0318 12:44:40.111649    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.628360    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-642600" podUID="237133d7-6f1a-42ee-8cf2-a2d7564d67fc"
	I0318 12:44:40.111754    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.628590    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ecbdcbdad3fa79af8ef70896ae67d65b14c47b5811078c5d6d167e0f295d1bc"
	I0318 12:44:40.111754    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.836390    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.111851    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.836523    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:34.836498609 +0000 UTC m=+10.847738345 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.111851    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937295    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111851    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937349    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.111851    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937443    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:34.937423048 +0000 UTC m=+10.948662684 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112041    5712 command_runner.go:130] > Mar 18 12:43:33 multinode-642600 kubelet[1523]: E0318 12:43:33.359564    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.112041    5712 command_runner.go:130] > Mar 18 12:43:33 multinode-642600 kubelet[1523]: E0318 12:43:33.359732    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.112140    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.409996    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:40.112140    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.855132    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.112140    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.855288    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:38.85526758 +0000 UTC m=+14.866507216 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.112248    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955668    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112248    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955718    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112335    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955777    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:38.955759519 +0000 UTC m=+14.966999155 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112335    5712 command_runner.go:130] > Mar 18 12:43:35 multinode-642600 kubelet[1523]: E0318 12:43:35.360249    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.112335    5712 command_runner.go:130] > Mar 18 12:43:35 multinode-642600 kubelet[1523]: E0318 12:43:35.360337    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.112335    5712 command_runner.go:130] > Mar 18 12:43:37 multinode-642600 kubelet[1523]: E0318 12:43:37.360005    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.112508    5712 command_runner.go:130] > Mar 18 12:43:37 multinode-642600 kubelet[1523]: E0318 12:43:37.360005    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.112508    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.890447    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.112508    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.890642    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:46.890560586 +0000 UTC m=+22.901800222 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.112508    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991640    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112780    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991754    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112780    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991856    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:46.991836746 +0000 UTC m=+23.003076482 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.112884    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.360236    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.112884    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.360508    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.112991    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.425235    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:40.112991    5712 command_runner.go:130] > Mar 18 12:43:41 multinode-642600 kubelet[1523]: E0318 12:43:41.360362    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.113078    5712 command_runner.go:130] > Mar 18 12:43:41 multinode-642600 kubelet[1523]: E0318 12:43:41.360863    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:43 multinode-642600 kubelet[1523]: E0318 12:43:43.359722    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:43 multinode-642600 kubelet[1523]: E0318 12:43:43.360308    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:44 multinode-642600 kubelet[1523]: E0318 12:43:44.438590    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:45 multinode-642600 kubelet[1523]: E0318 12:43:45.360026    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:45 multinode-642600 kubelet[1523]: E0318 12:43:45.360101    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:46 multinode-642600 kubelet[1523]: E0318 12:43:46.970368    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:46 multinode-642600 kubelet[1523]: E0318 12:43:46.970583    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:44:02.970562522 +0000 UTC m=+38.981802258 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071352    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071390    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071448    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:44:03.071430219 +0000 UTC m=+39.082669855 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.359847    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.113198    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.360318    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.113774    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.360074    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.113774    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.360604    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.113774    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.453099    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:40.113969    5712 command_runner.go:130] > Mar 18 12:43:51 multinode-642600 kubelet[1523]: E0318 12:43:51.360369    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.114037    5712 command_runner.go:130] > Mar 18 12:43:51 multinode-642600 kubelet[1523]: E0318 12:43:51.361016    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:53 multinode-642600 kubelet[1523]: E0318 12:43:53.359799    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:53 multinode-642600 kubelet[1523]: E0318 12:43:53.359935    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:54 multinode-642600 kubelet[1523]: E0318 12:43:54.467487    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:55 multinode-642600 kubelet[1523]: E0318 12:43:55.359513    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:55 multinode-642600 kubelet[1523]: E0318 12:43:55.360047    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:57 multinode-642600 kubelet[1523]: E0318 12:43:57.359796    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:57 multinode-642600 kubelet[1523]: E0318 12:43:57.359970    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.360327    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.360455    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.483297    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:40.114096    5712 command_runner.go:130] > Mar 18 12:44:01 multinode-642600 kubelet[1523]: E0318 12:44:01.359691    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.114635    5712 command_runner.go:130] > Mar 18 12:44:01 multinode-642600 kubelet[1523]: E0318 12:44:01.360228    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:40.114804    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032626    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:40.114896    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032722    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.0327033 +0000 UTC m=+71.043942936 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:40.114925    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134727    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.114992    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134857    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.115066    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.135073    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.13505028 +0000 UTC m=+71.146289916 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:40.115066    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360260    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:40.115123    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360354    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
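
The MountVolume.SetUp failures above ('object "kube-system"/"coredns" not registered') mean the kubelet's informer caches have not yet re-synced with the restarted apiserver, not that the objects are missing; the durationBeforeRetry values trace the kubelet's doubling backoff (500ms, 1s, 2s, 4s, 8s, 16s, 32s across this boot). A quick confirmation that both referenced objects exist, assuming the same context name, would be:

	kubectl --context multinode-642600 -n kube-system get configmap coredns
	kubectl --context multinode-642600 -n default get configmap kube-root-ca.crt
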
	I0318 12:44:40.115123    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.124509    1523 scope.go:117] "RemoveContainer" containerID="996fb0f2ade69129acd747fc5146ef4295cc7ebd79cae8e8f881a21393ddb74a"
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.125880    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: E0318 12:44:04.127355    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d2718b8a-26a9-4c86-bf9a-221d1ee23ceb)\"" pod="kube-system/storage-provisioner" podUID="d2718b8a-26a9-4c86-bf9a-221d1ee23ceb"
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 kubelet[1523]: I0318 12:44:17.359956    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.325657    1523 scope.go:117] "RemoveContainer" containerID="301c80f8b38cb79f051755af6af0fb604c0eee0689fd1f2d22a66e0969a9583f"
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.374630    1523 scope.go:117] "RemoveContainer" containerID="4b94d396876e5c7e3b8c69b01560d10ad95ff183ab3cc78a194276537cfd6cf5"
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: E0318 12:44:24.399375    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
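
The iptables canary failure above comes from ip6tables (legacy) finding no nat table in the guest kernel; on an IPv4-only minikube cluster this is typically noise rather than a functional failure. It can be checked from the guest, assuming the same profile and that the ip6table_nat module is simply not loaded:

	out/minikube-windows-amd64.exe -p multinode-642600 ssh -- sudo ip6tables -t nat -L -n
	out/minikube-windows-amd64.exe -p multinode-642600 ssh -- sudo modprobe ip6table_nat
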
	I0318 12:44:40.116778    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 kubelet[1523]: I0318 12:44:35.962288    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1b2432b0ed66a1175586c13232eb9b9239f18a4f9a86e2a0c5f48c1407fdb14"
	I0318 12:44:40.117318    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 kubelet[1523]: I0318 12:44:36.079817    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1090dd57409807a15613607fd810b67863a9dd9c5a8512d7a6720906641c7f26"
	I0318 12:44:40.161036    5712 logs.go:123] Gathering logs for etcd [8e7911b58c58] ...
	I0318 12:44:40.161036    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7911b58c58"
	I0318 12:44:40.194913    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.200481Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 12:44:40.195613    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210029Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.25.148.129:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.25.148.129:2380","--initial-cluster=multinode-642600=https://172.25.148.129:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.25.148.129:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.25.148.129:2380","--name=multinode-642600","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0318 12:44:40.195613    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210181Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0318 12:44:40.195613    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.21031Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 12:44:40.195613    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210331Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.25.148.129:2380"]}
	I0318 12:44:40.195870    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210546Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 12:44:40.195870    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.222773Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"]}
	I0318 12:44:40.195985    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.228178Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-642600","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.25.148.129:2380"],"listen-peer-urls":["https://172.25.148.129:2380"],"advertise-client-urls":["https://172.25.148.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0318 12:44:40.195985    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.271498Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"41.739133ms"}
	I0318 12:44:40.196072    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.299465Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0318 12:44:40.196072    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.319578Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","commit-index":2138}
	I0318 12:44:40.196165    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.319995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 switched to configuration voters=()"}
	I0318 12:44:40.196165    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.320107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became follower at term 2"}
	I0318 12:44:40.196210    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.320138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 78764271becab2d0 [peers: [], term: 2, commit: 2138, applied: 0, lastindex: 2138, lastterm: 2]"}
	I0318 12:44:40.196210    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.325366Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0318 12:44:40.196210    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.329191Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1388}
	I0318 12:44:40.196294    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.333388Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1848}
	I0318 12:44:40.196294    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.357951Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0318 12:44:40.196359    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.372436Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"78764271becab2d0","timeout":"7s"}
	I0318 12:44:40.196359    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373126Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"78764271becab2d0"}
	I0318 12:44:40.196421    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373252Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"78764271becab2d0","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0318 12:44:40.196421    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373688Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0318 12:44:40.196421    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375391Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0318 12:44:40.196485    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375647Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0318 12:44:40.196540    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375735Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0318 12:44:40.196540    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.377469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 switched to configuration voters=(8680198388102902480)"}
	I0318 12:44:40.196577    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.377568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","added-peer-id":"78764271becab2d0","added-peer-peer-urls":["https://172.25.151.112:2380"]}
	I0318 12:44:40.196577    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.378749Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","cluster-version":"3.5"}
	I0318 12:44:40.196658    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.378942Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0318 12:44:40.196719    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.380244Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.380886Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"78764271becab2d0","initial-advertise-peer-urls":["https://172.25.148.129:2380"],"listen-peer-urls":["https://172.25.148.129:2380"],"advertise-client-urls":["https://172.25.148.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.383141Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.25.148.129:2380"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.383279Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.25.148.129:2380"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.393018Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.621966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 is starting a new election at term 2"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became pre-candidate at term 2"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgPreVoteResp from 78764271becab2d0 at term 2"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became candidate at term 3"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgVoteResp from 78764271becab2d0 at term 3"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became leader at term 3"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 78764271becab2d0 elected leader 78764271becab2d0 at term 3"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641347Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"78764271becab2d0","local-member-attributes":"{Name:multinode-642600 ClientURLs:[https://172.25.148.129:2379]}","request-path":"/0/members/78764271becab2d0/attributes","cluster-id":"31713adf8492fbc4","publish-timeout":"7s"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.64409Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.644373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.650212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.148.129:2379"}
	I0318 12:44:40.196747    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.651053Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
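
The etcd capture above boils down to a "docker ps" lookup followed by a "docker logs --tail 400" against the container it finds, executed over SSH inside the VM. A minimal sketch for reproducing it by hand, assuming the multinode-642600 profile from this run is still up and the container ID is unchanged:

	minikube -p multinode-642600 ssh -- docker ps -a --filter=name=k8s_etcd --format='{{.ID}}'
	minikube -p multinode-642600 ssh -- docker logs --tail 400 8e7911b58c58

The raft lines ending in "became leader at term 3" suggest the single-member cluster simply re-elected itself after the restart, i.e. etcd itself came back healthy.
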
	I0318 12:44:40.204567    5712 logs.go:123] Gathering logs for coredns [fcf17db92b35] ...
	I0318 12:44:40.204567    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf17db92b35"
	I0318 12:44:40.232995    5712 command_runner.go:130] > .:53
	I0318 12:44:40.233913    5712 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	I0318 12:44:40.233913    5712 command_runner.go:130] > CoreDNS-1.10.1
	I0318 12:44:40.233913    5712 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 12:44:40.233913    5712 command_runner.go:130] > [INFO] 127.0.0.1:53681 - 55845 "HINFO IN 162544917519141994.8165783507281513505. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.028223444s
	I0318 12:44:40.234132    5712 logs.go:123] Gathering logs for coredns [e81f1d2fdb36] ...
	I0318 12:44:40.234279    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81f1d2fdb36"
	I0318 12:44:40.266860    5712 command_runner.go:130] > .:53
	I0318 12:44:40.266981    5712 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	I0318 12:44:40.266981    5712 command_runner.go:130] > CoreDNS-1.10.1
	I0318 12:44:40.266981    5712 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 12:44:40.266981    5712 command_runner.go:130] > [INFO] 127.0.0.1:48183 - 41539 "HINFO IN 767578685007701398.8900982300391989616. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.040167772s
	I0318 12:44:40.266981    5712 command_runner.go:130] > [INFO] 10.244.0.3:56190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320901s
	I0318 12:44:40.266981    5712 command_runner.go:130] > [INFO] 10.244.0.3:43050 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.04023503s
	I0318 12:44:40.267134    5712 command_runner.go:130] > [INFO] 10.244.0.3:47302 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.158419612s
	I0318 12:44:40.267191    5712 command_runner.go:130] > [INFO] 10.244.0.3:37199 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.162590352s
	I0318 12:44:40.267191    5712 command_runner.go:130] > [INFO] 10.244.1.2:48003 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216101s
	I0318 12:44:40.267191    5712 command_runner.go:130] > [INFO] 10.244.1.2:48857 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000380201s
	I0318 12:44:40.267296    5712 command_runner.go:130] > [INFO] 10.244.1.2:52412 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000070401s
	I0318 12:44:40.267296    5712 command_runner.go:130] > [INFO] 10.244.1.2:59362 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000071801s
	I0318 12:44:40.267296    5712 command_runner.go:130] > [INFO] 10.244.0.3:38833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000250501s
	I0318 12:44:40.267296    5712 command_runner.go:130] > [INFO] 10.244.0.3:34860 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.064163607s
	I0318 12:44:40.267296    5712 command_runner.go:130] > [INFO] 10.244.0.3:45210 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227601s
	I0318 12:44:40.267296    5712 command_runner.go:130] > [INFO] 10.244.0.3:32804 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001229s
	I0318 12:44:40.267376    5712 command_runner.go:130] > [INFO] 10.244.0.3:44904 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01563145s
	I0318 12:44:40.267376    5712 command_runner.go:130] > [INFO] 10.244.0.3:34958 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002035s
	I0318 12:44:40.267376    5712 command_runner.go:130] > [INFO] 10.244.0.3:59094 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001507s
	I0318 12:44:40.267376    5712 command_runner.go:130] > [INFO] 10.244.0.3:39370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181001s
	I0318 12:44:40.267376    5712 command_runner.go:130] > [INFO] 10.244.1.2:40318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000302101s
	I0318 12:44:40.267456    5712 command_runner.go:130] > [INFO] 10.244.1.2:43523 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001489s
	I0318 12:44:40.267456    5712 command_runner.go:130] > [INFO] 10.244.1.2:47882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001346s
	I0318 12:44:40.267456    5712 command_runner.go:130] > [INFO] 10.244.1.2:38222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057401s
	I0318 12:44:40.267456    5712 command_runner.go:130] > [INFO] 10.244.1.2:49068 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001253s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.1.2:35375 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000582s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.1.2:40933 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179201s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.1.2:36014 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002051s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.0.3:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265401s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.0.3:52912 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148001s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.0.3:33147 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143701s
	I0318 12:44:40.267553    5712 command_runner.go:130] > [INFO] 10.244.0.3:49893 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000536s
	I0318 12:44:40.267634    5712 command_runner.go:130] > [INFO] 10.244.1.2:42681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001221s
	I0318 12:44:40.267634    5712 command_runner.go:130] > [INFO] 10.244.1.2:41416 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143s
	I0318 12:44:40.267634    5712 command_runner.go:130] > [INFO] 10.244.1.2:58254 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000241501s
	I0318 12:44:40.267634    5712 command_runner.go:130] > [INFO] 10.244.1.2:35844 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197201s
	I0318 12:44:40.267727    5712 command_runner.go:130] > [INFO] 10.244.0.3:33559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102201s
	I0318 12:44:40.267727    5712 command_runner.go:130] > [INFO] 10.244.0.3:53963 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158701s
	I0318 12:44:40.267727    5712 command_runner.go:130] > [INFO] 10.244.0.3:41406 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001297s
	I0318 12:44:40.267727    5712 command_runner.go:130] > [INFO] 10.244.0.3:34685 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000264001s
	I0318 12:44:40.267727    5712 command_runner.go:130] > [INFO] 10.244.1.2:43312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001178s
	I0318 12:44:40.267821    5712 command_runner.go:130] > [INFO] 10.244.1.2:55281 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235501s
	I0318 12:44:40.267821    5712 command_runner.go:130] > [INFO] 10.244.1.2:34710 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000874s
	I0318 12:44:40.267821    5712 command_runner.go:130] > [INFO] 10.244.1.2:57686 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000557s
	I0318 12:44:40.267821    5712 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0318 12:44:40.267821    5712 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
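
Both CoreDNS captures use the same "docker logs --tail 400" mechanism. A one-liner sketch for pulling only the failing lookups out of a query log like the one above, assuming the same container ID as captured in this run:

	docker logs e81f1d2fdb36 2>&1 | grep -E 'NXDOMAIN|SERVFAIL'

The NXDOMAIN answers shown here appear to be expected noise: the random-name HINFO query is CoreDNS's startup self-probe, and the unqualified "kubernetes.default" lookups fail until the client's search path expands them to the full service name.
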
	I0318 12:44:42.783841    5712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:44:42.814476    5712 command_runner.go:130] > 1997
	I0318 12:44:42.814581    5712 api_server.go:72] duration metric: took 1m6.579646s to wait for apiserver process to appear ...
	I0318 12:44:42.814644    5712 api_server.go:88] waiting for apiserver healthz status ...
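
The readiness gate here is process-level first, HTTP-level second: pgrep finds the apiserver PID (1997 above), after which minikube polls the healthz endpoint. A rough manual equivalent, run inside the VM, with the IP and port taken from this run:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # prints the apiserver PID (1997 in this run)
	curl -k https://172.25.148.129:8443/healthz     # expect "ok" once the server is serving
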
	I0318 12:44:42.824389    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 12:44:42.857700    5712 command_runner.go:130] > a48a6d310b86
	I0318 12:44:42.858239    5712 logs.go:276] 1 containers: [a48a6d310b86]
	I0318 12:44:42.866946    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 12:44:42.898126    5712 command_runner.go:130] > 8e7911b58c58
	I0318 12:44:42.898126    5712 logs.go:276] 1 containers: [8e7911b58c58]
	I0318 12:44:42.910537    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 12:44:42.939848    5712 command_runner.go:130] > fcf17db92b35
	I0318 12:44:42.939848    5712 command_runner.go:130] > e81f1d2fdb36
	I0318 12:44:42.939848    5712 logs.go:276] 2 containers: [fcf17db92b35 e81f1d2fdb36]
	I0318 12:44:42.949341    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 12:44:42.975784    5712 command_runner.go:130] > bd1e4f4d262e
	I0318 12:44:42.975784    5712 command_runner.go:130] > 47777d4c0b90
	I0318 12:44:42.975784    5712 logs.go:276] 2 containers: [bd1e4f4d262e 47777d4c0b90]
	I0318 12:44:42.984716    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 12:44:43.009854    5712 command_runner.go:130] > 575b41a3a85a
	I0318 12:44:43.009854    5712 command_runner.go:130] > 4bbad08fe59a
	I0318 12:44:43.009854    5712 logs.go:276] 2 containers: [575b41a3a85a 4bbad08fe59a]
	I0318 12:44:43.018844    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 12:44:43.054318    5712 command_runner.go:130] > 14ae9398d33b
	I0318 12:44:43.054318    5712 command_runner.go:130] > a54be4436901
	I0318 12:44:43.054318    5712 logs.go:276] 2 containers: [14ae9398d33b a54be4436901]
	I0318 12:44:43.066428    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 12:44:43.093681    5712 command_runner.go:130] > 9fec05a61d2a
	I0318 12:44:43.093681    5712 command_runner.go:130] > 5cf42651cb21
	I0318 12:44:43.093681    5712 logs.go:276] 2 containers: [9fec05a61d2a 5cf42651cb21]
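
The seven "docker ps -a --filter" calls above differ only in the name filter, one per control-plane component. The same inventory can be taken in one loop; a sketch, assuming it is executed inside the minikube VM:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
	done
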
	I0318 12:44:43.093681    5712 logs.go:123] Gathering logs for kubelet ...
	I0318 12:44:43.093681    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 12:44:43.124226    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:43.124660    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.841405    1388 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:43.124660    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.841736    1388 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.124660    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.842325    1388 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:43.124760    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: E0318 12:43:20.842583    1388 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 12:44:43.124760    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:43.124760    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 12:44:43.124760    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0318 12:44:43.124829    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 12:44:43.124829    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:43.124829    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.629315    1445 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:43.124829    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.629808    1445 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.631096    1445 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: E0318 12:43:21.631229    1445 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:23 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.100950    1523 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.101311    1523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.101646    1523 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.108175    1523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.123413    1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.204504    1523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0318 12:44:43.124890    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205069    1523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205344    1523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205667    1523 topology_manager.go:138] "Creating topology manager with none policy"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205685    1523 container_manager_linux.go:301] "Creating device plugin manager"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.206240    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.208674    1523 kubelet.go:393] "Attempting to sync node with API server"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.208817    1523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.209351    1523 kubelet.go:309] "Adding apiserver pod source"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.209491    1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.212857    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.213311    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.219866    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.220057    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.240215    1523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.245761    1523 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.248742    1523 server.go:1232] "Started kubelet"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.249814    1523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.251561    1523 server.go:462] "Adding debug handlers to kubelet server"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.254285    1523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.255480    1523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.255659    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-642600.17bddc6f5820f7a9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-642600", UID:"multinode-642600", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-642600"}, FirstTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-642600"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.25.148.129:8443: connect: connection refused'(may retry after sleeping)
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.259469    1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.261490    1523 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.265275    1523 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.270368    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="200ms"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.275611    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.275814    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.317069    1523 reconciler_new.go:29] "Reconciler: start to sync state"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327943    1523 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327963    1523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327985    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329007    1523 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329047    1523 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329057    1523 policy_none.go:49] "None policy: Start"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.336597    1523 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.336631    1523 state_mem.go:35] "Initializing new in-memory state store"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.337548    1523 state_mem.go:75] "Updated machine memory state"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.345495    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.348154    1523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.351399    1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.355603    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0318 12:44:43.125247    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.356232    1523 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.357037    1523 kubelet.go:2303] "Starting kubelet main sync loop"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.359069    1523 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.367050    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.367230    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.387242    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.387428    1523 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-642600\" not found"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.399151    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.399841    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.460339    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d5f09afee1a6ef36657c1ae3335ddda6" podNamespace="kube-system" podName="etcd-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.472389    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="400ms"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.474475    1523 topology_manager.go:215] "Topology Admit Handler" podUID="624de65f019baf96d4a0e2fb6064e413" podNamespace="kube-system" podName="kube-apiserver-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.487469    1523 topology_manager.go:215] "Topology Admit Handler" podUID="a1608bc774d0b3e96e1b6fbbded5cb52" podNamespace="kube-system" podName="kube-controller-manager-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.500311    1523 topology_manager.go:215] "Topology Admit Handler" podUID="cf50844b540be8ed0b3e767db413ac8f" podNamespace="kube-system" podName="kube-scheduler-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.527553    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d5f09afee1a6ef36657c1ae3335ddda6-etcd-certs\") pod \"etcd-multinode-642600\" (UID: \"d5f09afee1a6ef36657c1ae3335ddda6\") " pod="kube-system/etcd-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.527604    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d5f09afee1a6ef36657c1ae3335ddda6-etcd-data\") pod \"etcd-multinode-642600\" (UID: \"d5f09afee1a6ef36657c1ae3335ddda6\") " pod="kube-system/etcd-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534726    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed38da653fbefea9aeb0ebdb91f985394a7a792571704a4875018f5a6bc9abda"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534857    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d766c4514f0bf79902b72d04d9e3a09fc2bcf5ef330f41cd3e84e63f5151f2b6"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534873    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f100b1062a56929e04e6e4377055b065d93a28c504f060cce4695165a2c33db0"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534885    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a9b4c05a5ccd5364b8dac2797803c98520c4f98df0fba77af7521af64a15152"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534943    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f4709a3a45a45f0c67f457df8bb202ea2867cfedeaec4a164509190df13f510"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534961    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3500a9f1ca84ed3d58cdd473a0c7c47a59643858c05dfd90247a09b1b43302bd"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.552869    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad98ae0cd7c7708c7e02f0b23fc33f1ca2b404bd7fec324c21beefcbe17d009"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.571969    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.589127    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef37141be6db2ba71fd0f1d2feee00d6ab5d31d607323e4f5ffab4a3e70cfa5"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.614112    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.616006    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629143    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-flexvolume-dir\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629404    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629689    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629754    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-ca-certs\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:43.126230    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629780    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-k8s-certs\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629802    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-kubeconfig\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629825    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf50844b540be8ed0b3e767db413ac8f-kubeconfig\") pod \"kube-scheduler-multinode-642600\" (UID: \"cf50844b540be8ed0b3e767db413ac8f\") " pod="kube-system/kube-scheduler-multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629860    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-ca-certs\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629919    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-k8s-certs\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.875125    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="800ms"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.030740    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.031776    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.266849    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.266980    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.674768    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7281d6e698ea2dc42d7d3093ccde32b770bf8367fdb58230694380f40daeb9f"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.676706    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="1.6s"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.692553    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eca6768355c74817c50b811b96b5fcc93a181c4968c53d4d4b0d0252ff6dbd0a"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.700976    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.701062    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.708111    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f62197122538f83943df8b19710794ea6ea9a9ffa884082a1a62435e9b152c3f"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.731607    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.731695    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.790774    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.790867    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.868581    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.869663    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 kubelet[1523]: E0318 12:43:26.129309    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-642600.17bddc6f5820f7a9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-642600", UID:"multinode-642600", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-642600"}, FirstTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-642600"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.25.148.129:8443: connect: connection refused'(may retry after sleeping)
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:27 multinode-642600 kubelet[1523]: I0318 12:43:27.488157    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.626198    1523 kubelet_node_status.go:108] "Node was previously registered" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.626989    1523 kubelet_node_status.go:73] "Successfully registered node" node="multinode-642600"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.640050    1523 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.642279    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0318 12:44:43.127255    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.658382    1523 setters.go:552] "Node became not ready" node="multinode-642600" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-18T12:43:30Z","lastTransitionTime":"2024-03-18T12:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.223393    1523 apiserver.go:52] "Watching apiserver"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.230566    1523 topology_manager.go:215] "Topology Admit Handler" podUID="acd9d7a0-0e27-4bbb-8562-6fbf374742ca" podNamespace="kube-system" podName="kindnet-kpt4f"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231421    1523 topology_manager.go:215] "Topology Admit Handler" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b" podNamespace="kube-system" podName="coredns-5dd5756b68-fgn7v"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231644    1523 topology_manager.go:215] "Topology Admit Handler" podUID="449242c2-ad12-4da5-b339-3be7ab8a9b16" podNamespace="kube-system" podName="kube-proxy-4dg79"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231779    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d2718b8a-26a9-4c86-bf9a-221d1ee23ceb" podNamespace="kube-system" podName="storage-provisioner"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231939    1523 topology_manager.go:215] "Topology Admit Handler" podUID="45969c0e-ac43-459e-95c0-86f7b76947db" podNamespace="default" podName="busybox-5b5d89c9d6-48qkw"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.232191    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.233435    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.235227    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-642600" podUID="4aa98cb9-f6ab-40b3-8c15-235ba4e09985"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.236365    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-642600" podUID="237133d7-6f1a-42ee-8cf2-a2d7564d67fc"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.266715    1523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.289094    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.301996    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-642600"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.322408    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/449242c2-ad12-4da5-b339-3be7ab8a9b16-lib-modules\") pod \"kube-proxy-4dg79\" (UID: \"449242c2-ad12-4da5-b339-3be7ab8a9b16\") " pod="kube-system/kube-proxy-4dg79"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.322793    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-xtables-lock\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323081    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d2718b8a-26a9-4c86-bf9a-221d1ee23ceb-tmp\") pod \"storage-provisioner\" (UID: \"d2718b8a-26a9-4c86-bf9a-221d1ee23ceb\") " pod="kube-system/storage-provisioner"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323213    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-cni-cfg\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323245    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/449242c2-ad12-4da5-b339-3be7ab8a9b16-xtables-lock\") pod \"kube-proxy-4dg79\" (UID: \"449242c2-ad12-4da5-b339-3be7ab8a9b16\") " pod="kube-system/kube-proxy-4dg79"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323294    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-lib-modules\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.324469    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.324580    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:31.824540428 +0000 UTC m=+7.835780164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339515    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339554    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339661    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:31.839645304 +0000 UTC m=+7.850885040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.384452    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-642600" podStartSLOduration=0.384368133 podCreationTimestamp="2024-03-18 12:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:43:31.360389871 +0000 UTC m=+7.371629607" watchObservedRunningTime="2024-03-18 12:43:31.384368133 +0000 UTC m=+7.395607769"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.431280    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-642600" podStartSLOduration=0.431225058 podCreationTimestamp="2024-03-18 12:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:43:31.388015127 +0000 UTC m=+7.399254863" watchObservedRunningTime="2024-03-18 12:43:31.431225058 +0000 UTC m=+7.442464794"
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.828430    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.828605    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:32.828568222 +0000 UTC m=+8.839807858 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930285    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.128229    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930420    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930532    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:32.930496159 +0000 UTC m=+8.941735795 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.133795    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889c16eb0ab731956d02a28d0337dc6ff349dc574ba10d4fc1a939fb2e09d6d3"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.147805    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a2f0ccaf5c4c6c0019124eda20c358dfa8aa20f0c92ade10aa3de32608e3527"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.369742    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d04d3e415061983b742e6c14f1a5f562" path="/var/lib/kubelet/pods/d04d3e415061983b742e6c14f1a5f562/volumes"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.371223    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ec96a596e22f5afedbd92a854d1b8bec" path="/var/lib/kubelet/pods/ec96a596e22f5afedbd92a854d1b8bec/volumes"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.628360    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-642600" podUID="237133d7-6f1a-42ee-8cf2-a2d7564d67fc"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.628590    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ecbdcbdad3fa79af8ef70896ae67d65b14c47b5811078c5d6d167e0f295d1bc"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.836390    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.836523    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:34.836498609 +0000 UTC m=+10.847738345 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937295    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937349    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937443    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:34.937423048 +0000 UTC m=+10.948662684 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:33 multinode-642600 kubelet[1523]: E0318 12:43:33.359564    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:33 multinode-642600 kubelet[1523]: E0318 12:43:33.359732    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.409996    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.855132    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.855288    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:38.85526758 +0000 UTC m=+14.866507216 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955668    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955718    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955777    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:38.955759519 +0000 UTC m=+14.966999155 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:35 multinode-642600 kubelet[1523]: E0318 12:43:35.360249    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:35 multinode-642600 kubelet[1523]: E0318 12:43:35.360337    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:37 multinode-642600 kubelet[1523]: E0318 12:43:37.360005    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:37 multinode-642600 kubelet[1523]: E0318 12:43:37.360005    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.890447    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.890642    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:46.890560586 +0000 UTC m=+22.901800222 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991640    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991754    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.129238    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991856    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:46.991836746 +0000 UTC m=+23.003076482 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.360236    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.360508    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.425235    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:41 multinode-642600 kubelet[1523]: E0318 12:43:41.360362    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:41 multinode-642600 kubelet[1523]: E0318 12:43:41.360863    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:43 multinode-642600 kubelet[1523]: E0318 12:43:43.359722    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:43 multinode-642600 kubelet[1523]: E0318 12:43:43.360308    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:44 multinode-642600 kubelet[1523]: E0318 12:43:44.438590    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:45 multinode-642600 kubelet[1523]: E0318 12:43:45.360026    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:45 multinode-642600 kubelet[1523]: E0318 12:43:45.360101    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:46 multinode-642600 kubelet[1523]: E0318 12:43:46.970368    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:46 multinode-642600 kubelet[1523]: E0318 12:43:46.970583    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:44:02.970562522 +0000 UTC m=+38.981802258 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071352    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071390    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071448    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:44:03.071430219 +0000 UTC m=+39.082669855 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.359847    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.360318    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.360074    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.360604    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.453099    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:51 multinode-642600 kubelet[1523]: E0318 12:43:51.360369    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:51 multinode-642600 kubelet[1523]: E0318 12:43:51.361016    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:53 multinode-642600 kubelet[1523]: E0318 12:43:53.359799    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:53 multinode-642600 kubelet[1523]: E0318 12:43:53.359935    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:54 multinode-642600 kubelet[1523]: E0318 12:43:54.467487    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:55 multinode-642600 kubelet[1523]: E0318 12:43:55.359513    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:55 multinode-642600 kubelet[1523]: E0318 12:43:55.360047    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:57 multinode-642600 kubelet[1523]: E0318 12:43:57.359796    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.130247    5712 command_runner.go:130] > Mar 18 12:43:57 multinode-642600 kubelet[1523]: E0318 12:43:57.359970    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.360327    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.360455    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.483297    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:01 multinode-642600 kubelet[1523]: E0318 12:44:01.359691    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:01 multinode-642600 kubelet[1523]: E0318 12:44:01.360228    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032626    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032722    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.0327033 +0000 UTC m=+71.043942936 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134727    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134857    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.135073    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.13505028 +0000 UTC m=+71.146289916 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360260    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360354    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.124509    1523 scope.go:117] "RemoveContainer" containerID="996fb0f2ade69129acd747fc5146ef4295cc7ebd79cae8e8f881a21393ddb74a"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.125880    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: E0318 12:44:04.127355    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d2718b8a-26a9-4c86-bf9a-221d1ee23ceb)\"" pod="kube-system/storage-provisioner" podUID="d2718b8a-26a9-4c86-bf9a-221d1ee23ceb"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 kubelet[1523]: I0318 12:44:17.359956    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.325657    1523 scope.go:117] "RemoveContainer" containerID="301c80f8b38cb79f051755af6af0fb604c0eee0689fd1f2d22a66e0969a9583f"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.374630    1523 scope.go:117] "RemoveContainer" containerID="4b94d396876e5c7e3b8c69b01560d10ad95ff183ab3cc78a194276537cfd6cf5"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: E0318 12:44:24.399375    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 kubelet[1523]: I0318 12:44:35.962288    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1b2432b0ed66a1175586c13232eb9b9239f18a4f9a86e2a0c5f48c1407fdb14"
	I0318 12:44:43.131229    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 kubelet[1523]: I0318 12:44:36.079817    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1090dd57409807a15613607fd810b67863a9dd9c5a8512d7a6720906641c7f26"
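
Note: the MountVolume.SetUp retries in the kubelet log above back off with doubling delays (500ms, 1s, 2s, 4s, 8s, 16s, 32s). A minimal Go sketch of that schedule, for reference only; the initial delay, factor, and cap here are assumptions inferred from the intervals visible in the log, not kubelet's actual source:

    package main

    import (
        "fmt"
        "time"
    )

    // backoffSchedule reproduces the doubling retry delays observed above.
    // initial/factor/limit are inferred from the log, not taken from kubelet.
    func backoffSchedule(initial time.Duration, factor float64, limit time.Duration, steps int) []time.Duration {
        out := make([]time.Duration, 0, steps)
        d := initial
        for i := 0; i < steps; i++ {
            out = append(out, d)
            d = time.Duration(float64(d) * factor)
            if d > limit {
                d = limit
            }
        }
        return out
    }

    func main() {
        for _, d := range backoffSchedule(500*time.Millisecond, 2.0, 2*time.Minute, 7) {
            fmt.Println(d) // 500ms 1s 2s 4s 8s 16s 32s
        }
    }
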
	I0318 12:44:43.178920    5712 logs.go:123] Gathering logs for dmesg ...
	I0318 12:44:43.178920    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 12:44:43.200748    5712 command_runner.go:130] > [Mar18 12:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0318 12:44:43.200883    5712 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0318 12:44:43.200883    5712 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0318 12:44:43.200993    5712 command_runner.go:130] > [  +0.129398] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.023142] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.067111] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.023049] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0318 12:44:43.201042    5712 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +5.633479] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.746575] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +1.948336] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +7.356358] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0318 12:44:43.201042    5712 command_runner.go:130] > [Mar18 12:42] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.196447] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [Mar18 12:43] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.116812] kauditd_printk_skb: 73 callbacks suppressed
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.565179] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.224131] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.243543] systemd-fstab-generator[1034]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +2.986318] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.197212] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.228503] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0318 12:44:43.201042    5712 command_runner.go:130] > [  +0.297734] systemd-fstab-generator[1258]: Ignoring "noauto" option for root device
	I0318 12:44:43.202038    5712 command_runner.go:130] > [  +0.969011] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	I0318 12:44:43.202129    5712 command_runner.go:130] > [  +0.114690] kauditd_printk_skb: 205 callbacks suppressed
	I0318 12:44:43.202129    5712 command_runner.go:130] > [  +3.575437] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	I0318 12:44:43.202129    5712 command_runner.go:130] > [  +1.537938] kauditd_printk_skb: 44 callbacks suppressed
	I0318 12:44:43.202129    5712 command_runner.go:130] > [  +6.654182] kauditd_printk_skb: 30 callbacks suppressed
	I0318 12:44:43.202129    5712 command_runner.go:130] > [  +4.384606] systemd-fstab-generator[2563]: Ignoring "noauto" option for root device
	I0318 12:44:43.202129    5712 command_runner.go:130] > [  +7.200668] kauditd_printk_skb: 70 callbacks suppressed
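The NFSD lines in the dmesg dump above fail with -2 (ENOENT) because /var/lib/nfs/v4recovery does not exist in the guest image, so NFSv4 client recovery tracking never initializes. A minimal workaround sketch, assuming shell access to the multinode-642600 VM (the directory path is taken from the log itself; whether it survives a VM rebuild is not guaranteed):

	# create the NFSv4 state recovery directory that nfsd reports missing
	out/minikube-windows-amd64.exe -p multinode-642600 ssh "sudo mkdir -p /var/lib/nfs/v4recovery"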
	I0318 12:44:43.203821    5712 logs.go:123] Gathering logs for kube-scheduler [47777d4c0b90] ...
	I0318 12:44:43.203821    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47777d4c0b90"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:43.828879       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:43.242083    5712 command_runner.go:130] ! W0318 12:18:46.562226       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 12:44:43.242083    5712 command_runner.go:130] ! W0318 12:18:46.562618       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! W0318 12:18:46.562705       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 12:44:43.242083    5712 command_runner.go:130] ! W0318 12:18:46.562793       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
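The scheduler log itself names the remedy for the configmap lookup failure above. A minimal sketch, assuming the subject should be the scheduler's own identity system:kube-scheduler (the very user the messages show being denied); the binding name here is illustrative, not taken from the log:

	# grant read access to the extension-apiserver-authentication configmap
	# (the extension-apiserver-authentication-reader role ships with kubeadm clusters)
	kubectl --context multinode-642600 -n kube-system create rolebinding \
	  extension-apiserver-authentication-reader-binding \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler

As the "Caches are synced" line further down shows, the scheduler's retries eventually succeed once the cluster's RBAC objects finish installing, so the binding is only needed if the warnings persist.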
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:46.615857       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:46.615957       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:46.622177       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:46.622201       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:46.625084       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 12:44:43.242083    5712 command_runner.go:130] ! I0318 12:18:46.625162       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! W0318 12:18:46.631110       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! E0318 12:18:46.631164       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:43.242083    5712 command_runner.go:130] ! W0318 12:18:46.634891       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:43.242610    5712 command_runner.go:130] ! E0318 12:18:46.634917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:43.242610    5712 command_runner.go:130] ! W0318 12:18:46.636313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:43.242655    5712 command_runner.go:130] ! E0318 12:18:46.638655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:43.242690    5712 command_runner.go:130] ! W0318 12:18:46.636730       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.242690    5712 command_runner.go:130] ! E0318 12:18:46.639099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.242822    5712 command_runner.go:130] ! W0318 12:18:46.636905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.242869    5712 command_runner.go:130] ! E0318 12:18:46.639254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.242869    5712 command_runner.go:130] ! W0318 12:18:46.636986       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.242938    5712 command_runner.go:130] ! E0318 12:18:46.639495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.242965    5712 command_runner.go:130] ! W0318 12:18:46.641683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:43.242965    5712 command_runner.go:130] ! E0318 12:18:46.641953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:43.243040    5712 command_runner.go:130] ! W0318 12:18:46.642236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:43.243061    5712 command_runner.go:130] ! E0318 12:18:46.642375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:43.243117    5712 command_runner.go:130] ! W0318 12:18:46.642673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:43.243154    5712 command_runner.go:130] ! W0318 12:18:46.646073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:43.243154    5712 command_runner.go:130] ! E0318 12:18:46.647270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:43.243237    5712 command_runner.go:130] ! W0318 12:18:46.646147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:43.243277    5712 command_runner.go:130] ! E0318 12:18:46.647534       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:43.243326    5712 command_runner.go:130] ! W0318 12:18:46.646208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243363    5712 command_runner.go:130] ! E0318 12:18:46.647719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243419    5712 command_runner.go:130] ! W0318 12:18:46.646271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:43.243463    5712 command_runner.go:130] ! E0318 12:18:46.647738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:43.243463    5712 command_runner.go:130] ! W0318 12:18:46.646322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:43.243463    5712 command_runner.go:130] ! E0318 12:18:46.647752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:43.243568    5712 command_runner.go:130] ! E0318 12:18:46.647915       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:46.650301       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:46.650528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.471960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.472093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.540921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.541368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.545171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.546126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.563772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.563806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.597770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.597873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.684794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.685008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.685352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.685509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.840132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.840303       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.879838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.880363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:47.906171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:47.906493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:48.059997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! E0318 12:18:48.060049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:43.243649    5712 command_runner.go:130] ! W0318 12:18:48.096160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.244432    5712 command_runner.go:130] ! E0318 12:18:48.096304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:43.244432    5712 command_runner.go:130] ! W0318 12:18:48.096504       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:43.244432    5712 command_runner.go:130] ! E0318 12:18:48.096662       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:43.244432    5712 command_runner.go:130] ! W0318 12:18:48.133175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:43.244432    5712 command_runner.go:130] ! E0318 12:18:48.133469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:43.244432    5712 command_runner.go:130] ! W0318 12:18:48.135066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:43.244432    5712 command_runner.go:130] ! E0318 12:18:48.135196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:43.244432    5712 command_runner.go:130] ! I0318 12:18:50.022459       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:43.244432    5712 command_runner.go:130] ! E0318 12:40:51.995231       1 run.go:74] "command failed" err="finished without leader elect"
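The final scheduler line, "finished without leader elect", is logged when the leader-election loop ends, typically because the lease could not be renewed during a control-plane restart; it is consistent with the node going down around 12:40 rather than with a scheduling crash. A quick way to inspect the election state, sketched under the assumption that the cluster uses the default coordination.k8s.io Lease (standard since Kubernetes 1.14):

	# show who holds the scheduler leader-election lease and when it was last renewed
	kubectl --context multinode-642600 -n kube-system get lease kube-scheduler -o yaml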
	I0318 12:44:43.255393    5712 logs.go:123] Gathering logs for kube-proxy [4bbad08fe59a] ...
	I0318 12:44:43.255393    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbad08fe59a"
	I0318 12:44:43.288410    5712 command_runner.go:130] ! I0318 12:19:04.970720       1 server_others.go:69] "Using iptables proxy"
	I0318 12:44:43.288410    5712 command_runner.go:130] ! I0318 12:19:04.997380       1 node.go:141] Successfully retrieved node IP: 172.25.151.112
	I0318 12:44:43.288410    5712 command_runner.go:130] ! I0318 12:19:05.099028       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:44:43.288410    5712 command_runner.go:130] ! I0318 12:19:05.099065       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:44:43.288410    5712 command_runner.go:130] ! I0318 12:19:05.102885       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:44:43.289495    5712 command_runner.go:130] ! I0318 12:19:05.103013       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:44:43.289550    5712 command_runner.go:130] ! I0318 12:19:05.103652       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:44:43.289550    5712 command_runner.go:130] ! I0318 12:19:05.103704       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.289550    5712 command_runner.go:130] ! I0318 12:19:05.105505       1 config.go:188] "Starting service config controller"
	I0318 12:44:43.289550    5712 command_runner.go:130] ! I0318 12:19:05.106093       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:44:43.289615    5712 command_runner.go:130] ! I0318 12:19:05.106131       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:44:43.289615    5712 command_runner.go:130] ! I0318 12:19:05.106138       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:44:43.289615    5712 command_runner.go:130] ! I0318 12:19:05.107424       1 config.go:315] "Starting node config controller"
	I0318 12:44:43.289681    5712 command_runner.go:130] ! I0318 12:19:05.107456       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:44:43.289681    5712 command_runner.go:130] ! I0318 12:19:05.206699       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:44:43.289707    5712 command_runner.go:130] ! I0318 12:19:05.206811       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:44:43.289707    5712 command_runner.go:130] ! I0318 12:19:05.207857       1 shared_informer.go:318] Caches are synced for node config
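The kube-proxy log above shows a clean single-stack IPv4 start in iptables mode, including setting route_localnet=1 so NodePorts answer on localhost. Both facts can be spot-checked; a sketch, assuming the standard kubeadm-managed kube-proxy configmap name:

	# inspect the configured proxy mode (an empty mode: field means auto-detect,
	# which selected the iptables Proxier above)
	kubectl --context multinode-642600 -n kube-system get configmap kube-proxy -o yaml
	# verify the sysctl kube-proxy set inside the VM
	out/minikube-windows-amd64.exe -p multinode-642600 ssh "sysctl net.ipv4.conf.all.route_localnet"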
	I0318 12:44:43.291854    5712 logs.go:123] Gathering logs for Docker ...
	I0318 12:44:43.291854    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.326122    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:43.326650    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.326780    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0318 12:44:43.326780    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.326816    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0318 12:44:43.326832    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:43.326879    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
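The block above is a classic systemd start-limit trip: cri-dockerd starts before dockerd's socket exists, fails three times inside the rate-limit window, and systemd stops retrying ("Start request repeated too quickly"). The later journal entries show dockerd coming up successfully at 12:42; had cri-docker stayed down, the usual manual recovery is, as a sketch (standard systemctl commands, run inside the VM):

	# clear the start-rate counter and retry once /var/run/docker.sock is available
	sudo systemctl reset-failed cri-docker.service
	sudo systemctl restart cri-docker.service
	systemctl status cri-docker.service --no-pager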
	I0318 12:44:43.326892    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 systemd[1]: Starting Docker Application Container Engine...
	I0318 12:44:43.326892    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.799415676Z" level=info msg="Starting up"
	I0318 12:44:43.326892    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.800442474Z" level=info msg="containerd not running, starting managed containerd"
	I0318 12:44:43.326892    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.801655972Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	I0318 12:44:43.326960    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.836542309Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 12:44:43.326960    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.866837154Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 12:44:43.327005    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.866991653Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 12:44:43.327043    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.867166153Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 12:44:43.327043    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.867346253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868353051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868455451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868755450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868785850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868803850Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868815950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.869407649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.870171948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873462742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873569242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873718241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873818241Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874315040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874434440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874453940Z" level=info msg="metadata content store policy set" policy=shared
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880096930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880252829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880377329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880397729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880414329Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880488329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880819128Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 12:44:43.327095    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880926428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881236528Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881376427Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881400527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881426127Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881441527Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881474927Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327620    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881491327Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327752    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881506427Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327784    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881521027Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327784    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881536227Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.327784    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881566927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327784    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881586627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327864    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881601327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327864    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881617327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327864    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881631227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327948    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881646527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327948    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881659427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.327998    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881673727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328021    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881757827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328021    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881783527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881798027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881812927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881826827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881844827Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881868126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881889326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881902926Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882002626Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882117726Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882162226Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882178726Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882242626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882337926Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882358926Z" level=info msg="NRI interface is disabled by configuration."
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882603625Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882759725Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.883033524Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.883153424Z" level=info msg="containerd successfully booted in 0.049971s"
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:47 multinode-642600 dockerd[652]: time="2024-03-18T12:42:47.858472851Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 12:44:43.328049    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.057442718Z" level=info msg="Loading containers: start."
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.544395210Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.632442528Z" level=info msg="Loading containers: done."
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.662805631Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.663682128Z" level=info msg="Daemon has completed initialization"
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.725498031Z" level=info msg="API listen on [::]:2376"
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 systemd[1]: Started Docker Application Container Engine.
	I0318 12:44:43.328583    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.725911430Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 12:44:43.328711    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 systemd[1]: Stopping Docker Application Container Engine...
	I0318 12:44:43.328711    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.631434936Z" level=info msg="Processing signal 'terminated'"
	I0318 12:44:43.328711    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.633587433Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0318 12:44:43.328798    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634258932Z" level=info msg="Daemon shutdown complete"
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634450831Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634476831Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: docker.service: Deactivated successfully.
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: Stopped Docker Application Container Engine.
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: Starting Docker Application Container Engine...
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.717087499Z" level=info msg="Starting up"
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.718262797Z" level=info msg="containerd not running, starting managed containerd"
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.719705495Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1048
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.754738639Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784193992Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784236292Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784275292Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784291492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784317492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784331992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784550091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784651691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784673391Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784704091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784764391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784996290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787641686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787744286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787950186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788044886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788091986Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788127185Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788138585Z" level=info msg="metadata content store policy set" policy=shared
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789136284Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789269784Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 12:44:43.328851    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789298984Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789320484Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789342084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789644383Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.790600382Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791760980Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791832280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 12:44:43.329419    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791851580Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 12:44:43.329576    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791866579Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329576    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791880279Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791969479Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791989879Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792004479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792018079Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792030379Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792042479Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792063279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792077879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792090579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792103979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792117779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792135679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792148379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792161279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792174179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792188479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792199579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792211479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792223379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792238079Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792261579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792276079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.329633    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792287879Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792337479Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792356479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792368079Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792380379Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792530178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792576778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 12:44:43.330153    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792591078Z" level=info msg="NRI interface is disabled by configuration."
	I0318 12:44:43.330360    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792811378Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 12:44:43.330515    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792927678Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 12:44:43.330537    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.793108678Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 12:44:43.330537    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.793160477Z" level=info msg="containerd successfully booted in 0.039931s"
	I0318 12:44:43.330607    5712 command_runner.go:130] > Mar 18 12:43:17 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:17.767243919Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 12:44:43.330607    5712 command_runner.go:130] > Mar 18 12:43:17 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:17.800090666Z" level=info msg="Loading containers: start."
	I0318 12:44:43.330607    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.103803081Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 12:44:43.330607    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.187726546Z" level=info msg="Loading containers: done."
	I0318 12:44:43.330672    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.216487100Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 12:44:43.330672    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.216648600Z" level=info msg="Daemon has completed initialization"
	I0318 12:44:43.330733    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.271691012Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 12:44:43.330733    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.271966711Z" level=info msg="API listen on [::]:2376"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 systemd[1]: Started Docker Application Container Engine.
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Loaded network plugin cni"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker Info: &{ID:aa9100d3-1595-41ce-b36f-06932aef3ecb Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:53 SystemTime:2024-03-18T12:43:19.415553382Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002da070 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-642600 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Start cri-dockerd grpc backend"
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-fgn7v_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ed38da653fbefea9aeb0ebdb91f985394a7a792571704a4875018f5a6bc9abda\""
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-48qkw_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e\""
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.316277241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.317878239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.318571937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.319101537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356638277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.330785    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356750476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331313    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356767376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331313    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.357118676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331313    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.418245378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331313    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.421018274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331313    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.421217073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331313    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.422102972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331448    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428274662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331511    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428365762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331548    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428455862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331548    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428580261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331548    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67004ee038ee4247f6f751987304426067a63cee8c1636408dd16efea728ba78/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f62197122538f83943df8b19710794ea6ea9a9ffa884082a1a62435e9b152c3f/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eca6768355c74817c50b811b96b5fcc93a181c4968c53d4d4b0d0252ff6dbd0a/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7281d6e698ea2dc42d7d3093ccde32b770bf8367fdb58230694380f40daeb9f/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879224940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879310840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879325040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879857239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.050226267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.051715465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.056267457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.056729856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.064877643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065332743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065495042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065849742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091573301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091639201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091652401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091761800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:30Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.923135971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924017669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.331613    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924165569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332136    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924385369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332136    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955673419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332275    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955753819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332275    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955772119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.956168818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964148405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964256705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964669604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964999404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a2f0ccaf5c4c6c0019124eda20c358dfa8aa20f0c92ade10aa3de32608e3527/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/889c16eb0ab731956d02a28d0337dc6ff349dc574ba10d4fc1a939fb2e09d6d3/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391303322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391389722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391408822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391535621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413113087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413460286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413726486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.414492285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ecbdcbdad3fa79af8ef70896ae67d65b14c47b5811078c5d6d167e0f295d1bc/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850170088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850431387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850449987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850590387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011137468Z" level=info msg="shim disconnected" id=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 namespace=moby
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011334567Z" level=warning msg="cleaning up after shim disconnected" id=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 namespace=moby
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011364567Z" level=info msg="cleaning up dead shim" namespace=moby
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1042]: time="2024-03-18T12:44:03.012148165Z" level=info msg="ignoring event" container=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562340104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562524303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562584503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.563253802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.376262769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.376780468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.377021468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.377223268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:44:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1090dd57409807a15613607fd810b67863a9dd9c5a8512d7a6720906641c7f26/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684170919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684458920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.332339    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684558520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.333178    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.685142822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.333178    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901354745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.333178    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901518146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.333178    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901538746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.333178    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901651446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.333374    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:44:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e1b2432b0ed66a1175586c13232eb9b9239f18a4f9a86e2a0c5f48c1407fdb14/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0318 12:44:43.333374    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.227440411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:43.333374    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.227939926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.228081131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.228507343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:40 multinode-642600 dockerd[1042]: 2024/03/18 12:44:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:40 multinode-642600 dockerd[1042]: 2024/03/18 12:44:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:40 multinode-642600 dockerd[1042]: 2024/03/18 12:44:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.333432    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:43.365927    5712 logs.go:123] Gathering logs for describe nodes ...
	I0318 12:44:43.365927    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 12:44:43.545945    5712 command_runner.go:130] > Name:               multinode-642600
	I0318 12:44:43.545945    5712 command_runner.go:130] > Roles:              control-plane
	I0318 12:44:43.545945    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_18_52_0700
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0318 12:44:43.545945    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:43.545945    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:43.545945    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:18:46 +0000
	I0318 12:44:43.545945    5712 command_runner.go:130] > Taints:             <none>
	I0318 12:44:43.545945    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:43.545945    5712 command_runner.go:130] > Lease:
	I0318 12:44:43.545945    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600
	I0318 12:44:43.545945    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:43.545945    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:44:41 +0000
	I0318 12:44:43.545945    5712 command_runner.go:130] > Conditions:
	I0318 12:44:43.545945    5712 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0318 12:44:43.545945    5712 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0318 12:44:43.545945    5712 command_runner.go:130] >   MemoryPressure   False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0318 12:44:43.545945    5712 command_runner.go:130] >   DiskPressure     False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0318 12:44:43.545945    5712 command_runner.go:130] >   PIDPressure      False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0318 12:44:43.545945    5712 command_runner.go:130] >   Ready            True    Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:44:11 +0000   KubeletReady                 kubelet is posting ready status
	I0318 12:44:43.545945    5712 command_runner.go:130] > Addresses:
	I0318 12:44:43.545945    5712 command_runner.go:130] >   InternalIP:  172.25.148.129
	I0318 12:44:43.545945    5712 command_runner.go:130] >   Hostname:    multinode-642600
	I0318 12:44:43.545945    5712 command_runner.go:130] > Capacity:
	I0318 12:44:43.545945    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:43.545945    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:43.545945    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:43.545945    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:43.545945    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:43.545945    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:43.545945    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:43.545945    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:43.545945    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:43.545945    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:43.545945    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:43.545945    5712 command_runner.go:130] > System Info:
	I0318 12:44:43.545945    5712 command_runner.go:130] >   Machine ID:                 021cb44913fc4689ab25739f723ae3da
	I0318 12:44:43.545945    5712 command_runner.go:130] >   System UUID:                8a1bcbab-f132-7f42-b33a-a7db97e0afe6
	I0318 12:44:43.545945    5712 command_runner.go:130] >   Boot ID:                    f11360a5-920e-4374-9d22-d06f111079d8
	I0318 12:44:43.545945    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:43.546950    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:43.546950    5712 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0318 12:44:43.546950    5712 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0318 12:44:43.546950    5712 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:43.546950    5712 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0318 12:44:43.546950    5712 command_runner.go:130] >   default                     busybox-5b5d89c9d6-48qkw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-fgn7v                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 etcd-multinode-642600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 kindnet-kpt4f                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-642600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-642600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 kube-proxy-4dg79                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-642600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:43.546950    5712 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:43.546950    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:43.546950    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Resource           Requests     Limits
	I0318 12:44:43.546950    5712 command_runner.go:130] >   --------           --------     ------
	I0318 12:44:43.546950    5712 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0318 12:44:43.546950    5712 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0318 12:44:43.546950    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0318 12:44:43.546950    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0318 12:44:43.546950    5712 command_runner.go:130] > Events:
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 12:44:43.546950    5712 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  Starting                 25m                kube-proxy       
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  Starting                 69s                kube-proxy       
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  Starting                 25m                kubelet          Starting kubelet.
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     25m                kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    25m                kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  25m                kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  RegisteredNode           25m                node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeReady                25m                kubelet          Node multinode-642600 status is now: NodeReady
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  Starting                 79s                kubelet          Starting kubelet.
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  79s (x8 over 79s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    79s (x8 over 79s)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:43.546950    5712 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	I0318 12:44:43.574954    5712 command_runner.go:130] > Name:               multinode-642600-m02
	I0318 12:44:43.574978    5712 command_runner.go:130] > Roles:              <none>
	I0318 12:44:43.574978    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:43.574978    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:43.574978    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:43.574978    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600-m02
	I0318 12:44:43.574978    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:43.574978    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:43.575084    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:43.575084    5712 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 12:44:43.575084    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_22_13_0700
	I0318 12:44:43.575084    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:43.575168    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:43.575168    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:43.575168    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:43.575168    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:22:12 +0000
	I0318 12:44:43.575168    5712 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 12:44:43.575168    5712 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 12:44:43.575168    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:43.575275    5712 command_runner.go:130] > Lease:
	I0318 12:44:43.575275    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600-m02
	I0318 12:44:43.575275    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:43.575275    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:40:15 +0000
	I0318 12:44:43.575275    5712 command_runner.go:130] > Conditions:
	I0318 12:44:43.575351    5712 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 12:44:43.575374    5712 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 12:44:43.575400    5712 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.575400    5712 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.575400    5712 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.575400    5712 command_runner.go:130] > Addresses:
	I0318 12:44:43.575400    5712 command_runner.go:130] >   InternalIP:  172.25.159.102
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Hostname:    multinode-642600-m02
	I0318 12:44:43.575400    5712 command_runner.go:130] > Capacity:
	I0318 12:44:43.575400    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:43.575400    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:43.575400    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:43.575400    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:43.575400    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:43.575400    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:43.575400    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:43.575400    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:43.575400    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:43.575400    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:43.575400    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:43.575400    5712 command_runner.go:130] > System Info:
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Machine ID:                 3840c114554e41ff9ded1410244d8aba
	I0318 12:44:43.575400    5712 command_runner.go:130] >   System UUID:                23dbf5b1-f940-4749-8caf-1ae12d869a30
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Boot ID:                    9a3fcab5-beb6-4505-b112-82809850bba3
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:43.575400    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:43.575400    5712 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0318 12:44:43.575400    5712 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0318 12:44:43.575400    5712 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:43.575400    5712 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0318 12:44:43.575400    5712 command_runner.go:130] >   default                     busybox-5b5d89c9d6-hmhdf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0318 12:44:43.575400    5712 command_runner.go:130] >   kube-system                 kindnet-d5llj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0318 12:44:43.575400    5712 command_runner.go:130] >   kube-system                 kube-proxy-vts9f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0318 12:44:43.575400    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:43.575400    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Resource           Requests   Limits
	I0318 12:44:43.575400    5712 command_runner.go:130] >   --------           --------   ------
	I0318 12:44:43.575400    5712 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 12:44:43.575400    5712 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 12:44:43.575400    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 12:44:43.575400    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 12:44:43.575400    5712 command_runner.go:130] > Events:
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 12:44:43.575400    5712 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientMemory
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasNoDiskPressure
	I0318 12:44:43.575400    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientPID
	I0318 12:44:43.575933    5712 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	I0318 12:44:43.575933    5712 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-642600-m02 status is now: NodeReady
	I0318 12:44:43.575933    5712 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	I0318 12:44:43.575933    5712 command_runner.go:130] >   Normal  NodeNotReady             20s                node-controller  Node multinode-642600-m02 status is now: NodeNotReady
	I0318 12:44:43.605589    5712 command_runner.go:130] > Name:               multinode-642600-m03
	I0318 12:44:43.605589    5712 command_runner.go:130] > Roles:              <none>
	I0318 12:44:43.605589    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600-m03
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_38_47_0700
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:43.605589    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:43.605589    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:38:46 +0000
	I0318 12:44:43.605589    5712 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 12:44:43.605589    5712 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 12:44:43.605589    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:43.605589    5712 command_runner.go:130] > Lease:
	I0318 12:44:43.605589    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600-m03
	I0318 12:44:43.605589    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:43.605589    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:39:48 +0000
	I0318 12:44:43.605589    5712 command_runner.go:130] > Conditions:
	I0318 12:44:43.605589    5712 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 12:44:43.605589    5712 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 12:44:43.605589    5712 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.605589    5712 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.605589    5712 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.605589    5712 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:43.605589    5712 command_runner.go:130] > Addresses:
	I0318 12:44:43.605589    5712 command_runner.go:130] >   InternalIP:  172.25.157.200
	I0318 12:44:43.605589    5712 command_runner.go:130] >   Hostname:    multinode-642600-m03
	I0318 12:44:43.605589    5712 command_runner.go:130] > Capacity:
	I0318 12:44:43.605589    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:43.605589    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:43.605589    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:43.605589    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:43.605589    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:43.605589    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:43.605589    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:43.605589    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:43.606611    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:43.606611    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:43.606611    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:43.606611    5712 command_runner.go:130] > System Info:
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Machine ID:                 b858c7f1c1bc42a69e1927ccc26ea5ce
	I0318 12:44:43.606611    5712 command_runner.go:130] >   System UUID:                8c4fd36f-ab8b-5447-9df2-542afafc5ab4
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Boot ID:                    cea0ecfe-24ab-4614-a808-1e2a7a960f26
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:43.606611    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:43.606611    5712 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0318 12:44:43.606611    5712 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0318 12:44:43.606611    5712 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:43.606611    5712 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0318 12:44:43.606611    5712 command_runner.go:130] >   kube-system                 kindnet-thkjp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0318 12:44:43.606611    5712 command_runner.go:130] >   kube-system                 kube-proxy-khbjt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0318 12:44:43.606611    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:43.606611    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Resource           Requests   Limits
	I0318 12:44:43.606611    5712 command_runner.go:130] >   --------           --------   ------
	I0318 12:44:43.606611    5712 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 12:44:43.606611    5712 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 12:44:43.606611    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 12:44:43.606611    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 12:44:43.606611    5712 command_runner.go:130] > Events:
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0318 12:44:43.606611    5712 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  Starting                 17m                    kube-proxy       
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  Starting                 5m54s                  kube-proxy       
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x5 over 17m)      kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x5 over 17m)      kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x5 over 17m)      kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeReady                17m                    kubelet          Node multinode-642600-m03 status is now: NodeReady
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  Starting                 5m57s                  kubelet          Starting kubelet.
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m57s (x2 over 5m57s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m57s (x2 over 5m57s)  kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m57s (x2 over 5m57s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m57s                  kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  RegisteredNode           5m56s                  node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeReady                5m51s                  kubelet          Node multinode-642600-m03 status is now: NodeReady
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  NodeNotReady             4m10s                  node-controller  Node multinode-642600-m03 status is now: NodeNotReady
	I0318 12:44:43.606611    5712 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
	I0318 12:44:43.618596    5712 logs.go:123] Gathering logs for kube-apiserver [a48a6d310b86] ...
	I0318 12:44:43.618596    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a48a6d310b86"
	I0318 12:44:43.657706    5712 command_runner.go:130] ! I0318 12:43:26.873064       1 options.go:220] external host was not specified, using 172.25.148.129
	I0318 12:44:43.657808    5712 command_runner.go:130] ! I0318 12:43:26.879001       1 server.go:148] Version: v1.28.4
	I0318 12:44:43.657868    5712 command_runner.go:130] ! I0318 12:43:26.879883       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.657868    5712 command_runner.go:130] ! I0318 12:43:27.623853       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 12:44:43.657868    5712 command_runner.go:130] ! I0318 12:43:27.658081       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 12:44:43.657928    5712 command_runner.go:130] ! I0318 12:43:27.658128       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 12:44:43.657987    5712 command_runner.go:130] ! I0318 12:43:27.660963       1 instance.go:298] Using reconciler: lease
	I0318 12:44:43.657987    5712 command_runner.go:130] ! I0318 12:43:27.814829       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0318 12:44:43.658018    5712 command_runner.go:130] ! W0318 12:43:27.815233       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658069    5712 command_runner.go:130] ! I0318 12:43:28.557814       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0318 12:44:43.658069    5712 command_runner.go:130] ! I0318 12:43:28.558168       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0318 12:44:43.658069    5712 command_runner.go:130] ! I0318 12:43:29.283146       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0318 12:44:43.658121    5712 command_runner.go:130] ! I0318 12:43:29.346403       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0318 12:44:43.658146    5712 command_runner.go:130] ! W0318 12:43:29.360856       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658146    5712 command_runner.go:130] ! W0318 12:43:29.360910       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658146    5712 command_runner.go:130] ! I0318 12:43:29.361419       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0318 12:44:43.658217    5712 command_runner.go:130] ! W0318 12:43:29.361431       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658217    5712 command_runner.go:130] ! I0318 12:43:29.362356       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0318 12:44:43.658217    5712 command_runner.go:130] ! I0318 12:43:29.365115       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0318 12:44:43.658217    5712 command_runner.go:130] ! W0318 12:43:29.365134       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0318 12:44:43.658277    5712 command_runner.go:130] ! W0318 12:43:29.365140       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0318 12:44:43.658300    5712 command_runner.go:130] ! I0318 12:43:29.370774       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0318 12:44:43.658300    5712 command_runner.go:130] ! W0318 12:43:29.370809       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.375063       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.375102       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.375108       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.375862       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.375929       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.375979       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.376693       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.384185       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.384228       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.384236       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.385110       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.385148       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.385155       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.388232       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.388272       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.392835       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.392872       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.392880       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.393504       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.393628       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.393636       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.401801       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.401838       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.401846       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.405508       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.409452       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.409492       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.409500       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! I0318 12:43:29.421682       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.421819       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0318 12:44:43.658328    5712 command_runner.go:130] ! W0318 12:43:29.421829       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0318 12:44:43.658870    5712 command_runner.go:130] ! I0318 12:43:29.426368       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0318 12:44:43.658870    5712 command_runner.go:130] ! W0318 12:43:29.426405       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658870    5712 command_runner.go:130] ! W0318 12:43:29.426413       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:43.658870    5712 command_runner.go:130] ! I0318 12:43:29.427337       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0318 12:44:43.658870    5712 command_runner.go:130] ! W0318 12:43:29.427376       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.658870    5712 command_runner.go:130] ! I0318 12:43:29.459555       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0318 12:44:43.659009    5712 command_runner.go:130] ! W0318 12:43:29.459595       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:43.659039    5712 command_runner.go:130] ! I0318 12:43:30.367734       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:43.659039    5712 command_runner.go:130] ! I0318 12:43:30.367932       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:43.659039    5712 command_runner.go:130] ! I0318 12:43:30.368782       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0318 12:44:43.659106    5712 command_runner.go:130] ! I0318 12:43:30.370542       1 secure_serving.go:213] Serving securely on [::]:8443
	I0318 12:44:43.659132    5712 command_runner.go:130] ! I0318 12:43:30.370628       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:43.659132    5712 command_runner.go:130] ! I0318 12:43:30.371667       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.372321       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.372682       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.373559       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.373947       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.374159       1 available_controller.go:423] Starting AvailableConditionController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.374194       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.374404       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.374979       1 aggregator.go:164] waiting for initial CRD sync...
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.375087       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.375452       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.376837       1 controller.go:116] Starting legacy_token_tracking_controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.377105       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.377485       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.378013       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.378732       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.379224       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.379834       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.380470       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.380848       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.382047       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.382230       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.383964       1 controller.go:134] Starting OpenAPI controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.384158       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.384420       1 naming_controller.go:291] Starting NamingConditionController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.384790       1 establishing_controller.go:76] Starting EstablishingController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.385986       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.386163       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.386327       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.474963       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.476622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.496736       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.497067       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.497511       1 aggregator.go:166] initial CRD sync complete...
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.498503       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.498662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.498825       1 cache.go:39] Caches are synced for autoregister controller
	I0318 12:44:43.659161    5712 command_runner.go:130] ! I0318 12:43:30.570075       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 12:44:43.659695    5712 command_runner.go:130] ! I0318 12:43:30.585880       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 12:44:43.659695    5712 command_runner.go:130] ! I0318 12:43:30.624565       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 12:44:43.659695    5712 command_runner.go:130] ! I0318 12:43:30.681515       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 12:44:43.659695    5712 command_runner.go:130] ! I0318 12:43:30.681604       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 12:44:43.659695    5712 command_runner.go:130] ! I0318 12:43:31.410513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 12:44:43.659787    5712 command_runner.go:130] ! W0318 12:43:31.917736       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129 172.25.151.112]
	I0318 12:44:43.659787    5712 command_runner.go:130] ! I0318 12:43:31.919293       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 12:44:43.659787    5712 command_runner.go:130] ! I0318 12:43:31.929122       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 12:44:43.659787    5712 command_runner.go:130] ! I0318 12:43:34.160688       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 12:44:43.659787    5712 command_runner.go:130] ! I0318 12:43:34.367742       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 12:44:43.659864    5712 command_runner.go:130] ! I0318 12:43:34.406080       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 12:44:43.659890    5712 command_runner.go:130] ! I0318 12:43:34.542647       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 12:44:43.659890    5712 command_runner.go:130] ! I0318 12:43:34.562855       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 12:44:43.659919    5712 command_runner.go:130] ! W0318 12:43:51.920595       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129]
	I0318 12:44:43.668353    5712 logs.go:123] Gathering logs for etcd [8e7911b58c58] ...
	I0318 12:44:43.668353    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7911b58c58"
	I0318 12:44:43.702589    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.200481Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 12:44:43.702721    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210029Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.25.148.129:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.25.148.129:2380","--initial-cluster=multinode-642600=https://172.25.148.129:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.25.148.129:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.25.148.129:2380","--name=multinode-642600","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0318 12:44:43.702836    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210181Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0318 12:44:43.702836    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.21031Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 12:44:43.702836    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210331Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.25.148.129:2380"]}
	I0318 12:44:43.702979    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210546Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 12:44:43.703017    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.222773Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"]}
	I0318 12:44:43.703138    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.228178Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-642600","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.25.148.129:2380"],"listen-peer-urls":["https://172.25.148.129:2380"],"advertise-client-urls":["https://172.25.148.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.271498Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"41.739133ms"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.299465Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.319578Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","commit-index":2138}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.319995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 switched to configuration voters=()"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.320107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became follower at term 2"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.320138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 78764271becab2d0 [peers: [], term: 2, commit: 2138, applied: 0, lastindex: 2138, lastterm: 2]"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.325366Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.329191Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1388}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.333388Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1848}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.357951Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.372436Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"78764271becab2d0","timeout":"7s"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373126Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"78764271becab2d0"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373252Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"78764271becab2d0","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373688Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375391Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375647Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375735Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.377469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 switched to configuration voters=(8680198388102902480)"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.377568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","added-peer-id":"78764271becab2d0","added-peer-peer-urls":["https://172.25.151.112:2380"]}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.378749Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","cluster-version":"3.5"}
	I0318 12:44:43.703167    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.378942Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0318 12:44:43.703712    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.380244Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 12:44:43.703756    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.380886Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"78764271becab2d0","initial-advertise-peer-urls":["https://172.25.148.129:2380"],"listen-peer-urls":["https://172.25.148.129:2380"],"advertise-client-urls":["https://172.25.148.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0318 12:44:43.703756    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.383141Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.25.148.129:2380"}
	I0318 12:44:43.703886    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.383279Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.25.148.129:2380"}
	I0318 12:44:43.703886    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.393018Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0318 12:44:43.703916    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.621966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 is starting a new election at term 2"}
	I0318 12:44:43.703916    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became pre-candidate at term 2"}
	I0318 12:44:43.704057    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgPreVoteResp from 78764271becab2d0 at term 2"}
	I0318 12:44:43.704057    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became candidate at term 3"}
	I0318 12:44:43.704057    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgVoteResp from 78764271becab2d0 at term 3"}
	I0318 12:44:43.704057    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became leader at term 3"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 78764271becab2d0 elected leader 78764271becab2d0 at term 3"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641347Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"78764271becab2d0","local-member-attributes":"{Name:multinode-642600 ClientURLs:[https://172.25.148.129:2379]}","request-path":"/0/members/78764271becab2d0/attributes","cluster-id":"31713adf8492fbc4","publish-timeout":"7s"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.64409Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.644373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.650212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.148.129:2379"}
	I0318 12:44:43.704117    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.651053Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
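The etcd log above records a clean single-member restart: member 78764271becab2d0 comes back as a follower at term 2, wins a pre-vote and then the election, and becomes leader at term 3 before serving client traffic on 127.0.0.1:2379 and 172.25.148.129:2379. A minimal way to confirm the member is healthy after such a restart is to query it over the same TLS material printed in the "starting with client TLS" line (a sketch, not part of the captured run; it assumes etcdctl is present inside the node, and reusing the server certificate as the client certificate is likewise an assumption):

    out/minikube-windows-amd64.exe -p multinode-642600 ssh -- sudo ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint status --write-out=table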
	I0318 12:44:43.711156    5712 logs.go:123] Gathering logs for kube-controller-manager [14ae9398d33b] ...
	I0318 12:44:43.711212    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ae9398d33b"
	I0318 12:44:43.742072    5712 command_runner.go:130] ! I0318 12:43:27.406049       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:43.742072    5712 command_runner.go:130] ! I0318 12:43:29.733819       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:44:43.742881    5712 command_runner.go:130] ! I0318 12:43:29.734137       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.743071    5712 command_runner.go:130] ! I0318 12:43:29.737351       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:43.743071    5712 command_runner.go:130] ! I0318 12:43:29.737598       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:43.743071    5712 command_runner.go:130] ! I0318 12:43:29.739365       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:44:43.743071    5712 command_runner.go:130] ! I0318 12:43:29.740428       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:43.743071    5712 command_runner.go:130] ! I0318 12:43:32.581261       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 12:44:43.743071    5712 command_runner.go:130] ! I0318 12:43:32.597867       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 12:44:43.743163    5712 command_runner.go:130] ! I0318 12:43:32.602078       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 12:44:43.743163    5712 command_runner.go:130] ! I0318 12:43:32.602099       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 12:44:43.743163    5712 command_runner.go:130] ! I0318 12:43:32.605600       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 12:44:43.743263    5712 command_runner.go:130] ! I0318 12:43:32.605807       1 expand_controller.go:328] "Starting expand controller"
	I0318 12:44:43.744074    5712 command_runner.go:130] ! I0318 12:43:32.605957       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 12:44:43.744565    5712 command_runner.go:130] ! I0318 12:43:32.620725       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 12:44:43.744565    5712 command_runner.go:130] ! I0318 12:43:32.621286       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 12:44:43.744565    5712 command_runner.go:130] ! I0318 12:43:32.621374       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 12:44:43.744565    5712 command_runner.go:130] ! I0318 12:43:32.663010       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 12:44:43.744565    5712 command_runner.go:130] ! I0318 12:43:32.663383       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.663451       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.674431       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.675030       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.675045       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.680220       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.680236       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.680266       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.681919       1 shared_informer.go:318] Caches are synced for tokens
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.684132       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.684147       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.684164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.685811       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.685845       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.686123       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.687526       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.687845       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.687858       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.687918       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.691958       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.692673       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.696192       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 12:44:43.744678    5712 command_runner.go:130] ! I0318 12:43:32.696622       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 12:44:43.745228    5712 command_runner.go:130] ! I0318 12:43:32.701031       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 12:44:43.745228    5712 command_runner.go:130] ! I0318 12:43:32.701415       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 12:44:43.745293    5712 command_runner.go:130] ! I0318 12:43:32.701449       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 12:44:43.745293    5712 command_runner.go:130] ! I0318 12:43:32.701458       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 12:44:43.745293    5712 command_runner.go:130] ! E0318 12:43:32.705162       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 12:44:43.745293    5712 command_runner.go:130] ! I0318 12:43:32.705349       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 12:44:43.745400    5712 command_runner.go:130] ! I0318 12:43:32.705364       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 12:44:43.745400    5712 command_runner.go:130] ! I0318 12:43:32.705376       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 12:44:43.745400    5712 command_runner.go:130] ! I0318 12:43:32.750736       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 12:44:43.745463    5712 command_runner.go:130] ! I0318 12:43:32.751361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 12:44:43.745486    5712 command_runner.go:130] ! W0318 12:43:32.751515       1 shared_informer.go:593] resyncPeriod 19h34m1.540802039s is smaller than resyncCheckPeriod 20h12m46.622656472s and the informer has already started. Changing it to 20h12m46.622656472s
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.752012       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.752529       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.752719       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.752884       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.753191       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.753284       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.753677       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.753791       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.753884       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.754036       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.754202       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.754691       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.755001       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.755205       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.755784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.755974       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.756144       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.756649       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.756826       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.757119       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.757364       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.757580       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! E0318 12:43:32.773718       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.773746       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.786590       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.786978       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.787007       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.795770       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.798452       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.798585       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.801712       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 12:44:43.745529    5712 command_runner.go:130] ! I0318 12:43:32.802261       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 12:44:43.746057    5712 command_runner.go:130] ! I0318 12:43:32.806063       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 12:44:43.746057    5712 command_runner.go:130] ! I0318 12:43:32.823560       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 12:44:43.746057    5712 command_runner.go:130] ! I0318 12:43:32.823578       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 12:44:43.746121    5712 command_runner.go:130] ! I0318 12:43:32.823595       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:43.746121    5712 command_runner.go:130] ! I0318 12:43:32.823621       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 12:44:43.746121    5712 command_runner.go:130] ! I0318 12:43:32.833033       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 12:44:43.746121    5712 command_runner.go:130] ! I0318 12:43:32.833480       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 12:44:43.746121    5712 command_runner.go:130] ! I0318 12:43:32.833494       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 12:44:43.746121    5712 command_runner.go:130] ! I0318 12:43:32.862160       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 12:44:43.746212    5712 command_runner.go:130] ! I0318 12:43:32.862209       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 12:44:43.746212    5712 command_runner.go:130] ! I0318 12:43:32.862524       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 12:44:43.746212    5712 command_runner.go:130] ! I0318 12:43:32.862562       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 12:44:43.746284    5712 command_runner.go:130] ! I0318 12:43:32.862573       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 12:44:43.746303    5712 command_runner.go:130] ! I0318 12:43:32.883369       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 12:44:43.746303    5712 command_runner.go:130] ! I0318 12:43:32.886141       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 12:44:43.746303    5712 command_runner.go:130] ! I0318 12:43:32.886674       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 12:44:43.746303    5712 command_runner.go:130] ! I0318 12:43:32.896468       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.896951       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.897135       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.900325       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.900580       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.903531       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.917793       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.918152       1 horizontal.go:200] "Starting HPA controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.918638       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.920489       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.920802       1 gc_controller.go:101] "Starting GC controller"
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.922940       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 12:44:43.746369    5712 command_runner.go:130] ! I0318 12:43:32.923834       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 12:44:43.746529    5712 command_runner.go:130] ! I0318 12:43:32.924143       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 12:44:43.746529    5712 command_runner.go:130] ! I0318 12:43:32.924461       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 12:44:43.746529    5712 command_runner.go:130] ! I0318 12:43:32.935394       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 12:44:43.746529    5712 command_runner.go:130] ! I0318 12:43:32.935610       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 12:44:43.746529    5712 command_runner.go:130] ! I0318 12:43:32.935623       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 12:44:43.746603    5712 command_runner.go:130] ! I0318 12:43:32.996434       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 12:44:43.746653    5712 command_runner.go:130] ! I0318 12:43:32.996586       1 job_controller.go:226] "Starting job controller"
	I0318 12:44:43.746670    5712 command_runner.go:130] ! I0318 12:43:32.996666       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 12:44:43.746670    5712 command_runner.go:130] ! I0318 12:43:33.085354       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 12:44:43.746670    5712 command_runner.go:130] ! I0318 12:43:33.086157       1 disruption.go:433] "Sending events to api server."
	I0318 12:44:43.746670    5712 command_runner.go:130] ! I0318 12:43:33.086235       1 disruption.go:444] "Starting disruption controller"
	I0318 12:44:43.746727    5712 command_runner.go:130] ! I0318 12:43:33.086245       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.141477       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.142359       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.142566       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.186973       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.187335       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.187410       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.236517       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 12:44:43.746751    5712 command_runner.go:130] ! I0318 12:43:33.236982       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 12:44:43.746866    5712 command_runner.go:130] ! I0318 12:43:33.237471       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 12:44:43.746866    5712 command_runner.go:130] ! I0318 12:43:33.286539       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 12:44:43.746866    5712 command_runner.go:130] ! I0318 12:43:33.287154       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 12:44:43.746866    5712 command_runner.go:130] ! I0318 12:43:33.287375       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 12:44:43.746933    5712 command_runner.go:130] ! I0318 12:43:43.355688       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 12:44:43.746957    5712 command_runner.go:130] ! I0318 12:43:43.355845       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.356879       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.357033       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.359716       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.361043       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.361062       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.364706       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.364861       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.364989       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.369492       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.369675       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.369706       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.375944       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.376145       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.377600       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.390058       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.405940       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600\" does not exist"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.408115       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.408433       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.408623       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m02\" does not exist"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.408708       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.408817       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.421506       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.446678       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.459596       1 shared_informer.go:318] Caches are synced for node
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.459833       1 range_allocator.go:174] "Sending events to api server"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.460258       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 12:44:43.746985    5712 command_runner.go:130] ! I0318 12:43:43.460829       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 12:44:43.747521    5712 command_runner.go:130] ! I0318 12:43:43.461091       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 12:44:43.747521    5712 command_runner.go:130] ! I0318 12:43:43.461418       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 12:44:43.747521    5712 command_runner.go:130] ! I0318 12:43:43.463618       1 shared_informer.go:318] Caches are synced for namespace
	I0318 12:44:43.747566    5712 command_runner.go:130] ! I0318 12:43:43.466097       1 shared_informer.go:318] Caches are synced for taint
	I0318 12:44:43.747566    5712 command_runner.go:130] ! I0318 12:43:43.466427       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 12:44:43.747566    5712 command_runner.go:130] ! I0318 12:43:43.466639       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 12:44:43.747566    5712 command_runner.go:130] ! I0318 12:43:43.466863       1 taint_manager.go:210] "Sending events to api server"
	I0318 12:44:43.747566    5712 command_runner.go:130] ! I0318 12:43:43.468821       1 event.go:307] "Event occurred" object="multinode-642600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600 event: Registered Node multinode-642600 in Controller"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.469328       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.469579       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.469959       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.477268       1 shared_informer.go:318] Caches are synced for deployment
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.486297       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.487082       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.487171       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.487768       1 shared_informer.go:318] Caches are synced for TTL
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.487848       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.489265       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.497682       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.498610       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.498725       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.501123       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.503362       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.505991       1 shared_informer.go:318] Caches are synced for expand
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.503938       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.506104       1 shared_informer.go:318] Caches are synced for service account
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.505782       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m02"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.505818       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m03"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.506356       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.521010       1 shared_informer.go:318] Caches are synced for HPA
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.524230       1 shared_informer.go:318] Caches are synced for GC
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.527081       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.534422       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.537721       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.545260       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.546769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.454588ms"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.547853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.476888ms"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.552128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66µs"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.552429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.199µs"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.565701       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.580927       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.585098       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.586663       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.590461       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.597830       1 shared_informer.go:318] Caches are synced for job
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.635734       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.658493       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:43.686534       1 shared_informer.go:318] Caches are synced for disruption
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:44.024395       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:44.024760       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:43:44.048280       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:11.303411       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:13.533509       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-48qkw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-48qkw"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:13.534203       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-fgn7v" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-fgn7v"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:13.534478       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:23.562573       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m02 status is now: NodeNotReady"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:23.591486       1 event.go:307] "Event occurred" object="kube-system/kindnet-d5llj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:23.614671       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vts9f" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:23.639496       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-hmhdf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.747677    5712 command_runner.go:130] ! I0318 12:44:23.661949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.740356ms"
	I0318 12:44:43.748571    5712 command_runner.go:130] ! I0318 12:44:23.663289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="50.499µs"
	I0318 12:44:43.748571    5712 command_runner.go:130] ! I0318 12:44:37.149797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.1µs"
	I0318 12:44:43.748571    5712 command_runner.go:130] ! I0318 12:44:37.209300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="28.125704ms"
	I0318 12:44:43.748571    5712 command_runner.go:130] ! I0318 12:44:37.209415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.4µs"
	I0318 12:44:43.748571    5712 command_runner.go:130] ! I0318 12:44:37.245284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.227968ms"
	I0318 12:44:43.748571    5712 command_runner.go:130] ! I0318 12:44:37.254358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="3.872028ms"
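Each "Gathering logs for ..." step above corresponds to the harness running docker logs --tail 400 against a specific container over SSH. The same capture can be reproduced by hand against this profile (a sketch; the container IDs 14ae9398d33b and a54be4436901 are specific to this run and would differ on a fresh cluster):

    # list the kube-controller-manager containers, including any exited instance
    out/minikube-windows-amd64.exe -p multinode-642600 ssh -- "docker ps -a --filter name=kube-controller-manager --format '{{.ID}} {{.Status}}'"
    # dump the last 400 lines of one instance, as the harness does
    out/minikube-windows-amd64.exe -p multinode-642600 ssh -- "docker logs --tail 400 14ae9398d33b"

Two kube-controller-manager blocks appear in this post-mortem because the component was restarted: a54be4436901 carries the pre-restart logs (timestamps from 12:18), while 14ae9398d33b covers the post-restart run (12:43 onward).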
	I0318 12:44:43.762553    5712 logs.go:123] Gathering logs for kube-controller-manager [a54be4436901] ...
	I0318 12:44:43.762553    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54be4436901"
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:43.818653       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:45.050029       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:45.050365       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:45.053707       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:45.056733       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:45.057073       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:45.057232       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:49.569825       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:49.602388       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 12:44:43.792568    5712 command_runner.go:130] ! I0318 12:18:49.603663       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.603680       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.621364       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.621624       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.621432       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.622281       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.644362       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.644758       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.646607       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.660400       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.661053       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.670023       1 shared_informer.go:318] Caches are synced for tokens
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.679784       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.680015       1 expand_controller.go:328] "Starting expand controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.680028       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.692925       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.693164       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.693449       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.727464       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.727835       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.727848       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.742409       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.743029       1 disruption.go:433] "Sending events to api server."
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.743301       1 disruption.go:444] "Starting disruption controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.743449       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.759716       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.760338       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.760376       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.829809       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.830343       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:49.830415       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.085725       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.086016       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.086167       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.234974       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.242121       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.242138       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.384031       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.384090       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.384100       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.384108       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.530182       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.530258       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.530267       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.695232       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.695351       1 job_controller.go:226] "Starting job controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.695361       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.833418       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.833674       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.833686       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.998838       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.999193       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:50.999227       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.141445       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.141508       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.141518       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.279642       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.279728       1 gc_controller.go:101] "Starting GC controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.279742       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.429394       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.429600       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:18:51.429612       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:19:01.598915       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:19:01.598966       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 12:44:43.793561    5712 command_runner.go:130] ! I0318 12:19:01.599163       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.599174       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.601488       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.601803       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.601987       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.602013       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.602019       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.623744       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.624435       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.624966       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.663430       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.663839       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.663858       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710104       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710384       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710455       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710487       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710760       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710795       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710822       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710886       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710930       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.710986       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711095       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711137       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711160       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711179       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711211       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711237       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711261       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711316       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711339       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711356       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711486       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711654       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.711784       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.715155       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.715586       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.715886       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.732340       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.732695       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.732944       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.747011       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.747361       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.747484       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 12:44:43.794564    5712 command_runner.go:130] ! E0318 12:19:01.771424       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.771527       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.771544       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 12:44:43.794564    5712 command_runner.go:130] ! I0318 12:19:01.772072       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 12:44:43.794564    5712 command_runner.go:130] ! E0318 12:19:01.775461       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.775656       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.788795       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.789335       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.789368       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.809091       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.809368       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.809720       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.846190       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.846779       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:01.846879       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.137994       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.138059       1 horizontal.go:200] "Starting HPA controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.138069       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.189502       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.189864       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.190041       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.191172       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.191256       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.191347       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.193057       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.193152       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.193246       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.194807       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.194851       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.195648       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.194886       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.345061       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.347311       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.364524       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.380069       1 shared_informer.go:318] Caches are synced for expand
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.390503       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.391317       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.393201       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.402532       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.419971       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.421082       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600\" does not exist"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.427201       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.427876       1 shared_informer.go:318] Caches are synced for service account
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.429003       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.429629       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.430311       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.432115       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.434603       1 shared_informer.go:318] Caches are synced for TTL
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.437362       1 shared_informer.go:318] Caches are synced for deployment
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.438306       1 shared_informer.go:318] Caches are synced for HPA
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.441785       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.442916       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.444302       1 shared_informer.go:318] Caches are synced for disruption
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.447137       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.447694       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.452098       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.454023       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.461158       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.464623       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.480847       1 shared_informer.go:318] Caches are synced for GC
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.487772       1 shared_informer.go:318] Caches are synced for namespace
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.490082       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.494160       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.499312       1 shared_informer.go:318] Caches are synced for node
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.499587       1 range_allocator.go:174] "Sending events to api server"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.499772       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.500365       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.500954       1 shared_informer.go:318] Caches are synced for job
	I0318 12:44:43.795565    5712 command_runner.go:130] ! I0318 12:19:02.501438       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.501724       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.503931       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.509883       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.528934       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600" podCIDRs=["10.244.0.0/24"]
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.565942       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.603468       1 shared_informer.go:318] Caches are synced for taint
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.603627       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.603721       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600"
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.603760       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.603782       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.603821       1 taint_manager.go:210] "Sending events to api server"
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.605481       1 event.go:307] "Event occurred" object="multinode-642600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600 event: Registered Node multinode-642600 in Controller"
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.613688       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:43.796565    5712 command_runner.go:130] ! I0318 12:19:02.644197       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:02.675188       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:02.675510       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:02.681286       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.023915       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.023946       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.029139       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.075135       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.175071       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kpt4f"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.181384       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4dg79"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.624405       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fgn7v"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.691902       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xkgdt"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.810454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="734.97569ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.847906       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="37.087083ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.945758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.729709ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:03.945958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.501µs"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:04.640409       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:04.732241       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-xkgdt"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:04.763359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.567183ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:04.828298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.870031ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:04.890459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.083804ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:04.890764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.4µs"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:15.938090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="157.9µs"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:15.982953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.301µs"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:17.607464       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:19.208242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.7µs"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:19.274086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.124146ms"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:19:19.275145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="211.9µs"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:22:12.652722       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m02\" does not exist"
	I0318 12:44:43.797557    5712 command_runner.go:130] ! I0318 12:22:12.679760       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m02" podCIDRs=["10.244.1.0/24"]
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:22:12.706735       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d5llj"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:22:12.706774       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vts9f"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:22:17.642129       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m02"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:22:17.642212       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:22:34.263318       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:01.851486       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:01.881281       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-hmhdf"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:01.924301       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-48qkw"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:01.946058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.676064ms"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:02.049702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="103.251772ms"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:02.049789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="35.4µs"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:04.783277       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.030749ms"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:04.783520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.9µs"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:05.441638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.350047ms"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:23:05.441876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="105µs"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:09.073772       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:09.075345       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:09.095707       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.2.0/24"]
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:09.110695       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-khbjt"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:09.110730       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-thkjp"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:12.715112       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m03"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:12.715611       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:27:30.856729       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:35:52.853028       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:35:52.854041       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:35:52.871920       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:35:52.891158       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:38:40.101072       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.798556    5712 command_runner.go:130] ! I0318 12:38:42.930337       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-642600-m03 event: Removing Node multinode-642600-m03 from Controller"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:38:46.825246       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:38:46.827225       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:38:46.865011       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.3.0/24"]
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:38:47.931681       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:38:52.975724       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:40:33.280094       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:40:33.281180       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:40:33.601041       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.799601    5712 command_runner.go:130] ! I0318 12:40:33.698293       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:43.817560    5712 logs.go:123] Gathering logs for coredns [fcf17db92b35] ...
	I0318 12:44:43.817560    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf17db92b35"
	I0318 12:44:43.846573    5712 command_runner.go:130] > .:53
	I0318 12:44:43.847044    5712 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	I0318 12:44:43.847110    5712 command_runner.go:130] > CoreDNS-1.10.1
	I0318 12:44:43.847110    5712 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 12:44:43.847110    5712 command_runner.go:130] > [INFO] 127.0.0.1:53681 - 55845 "HINFO IN 162544917519141994.8165783507281513505. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.028223444s
	I0318 12:44:43.849133    5712 logs.go:123] Gathering logs for kindnet [5cf42651cb21] ...
	I0318 12:44:43.849212    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf42651cb21"
	I0318 12:44:43.882471    5712 command_runner.go:130] ! I0318 12:29:43.278241       1 main.go:227] handling current node
	I0318 12:44:43.883103    5712 command_runner.go:130] ! I0318 12:29:43.278258       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883103    5712 command_runner.go:130] ! I0318 12:29:43.278267       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883217    5712 command_runner.go:130] ! I0318 12:29:43.279034       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883217    5712 command_runner.go:130] ! I0318 12:29:43.279112       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883217    5712 command_runner.go:130] ! I0318 12:29:53.290788       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883217    5712 command_runner.go:130] ! I0318 12:29:53.290919       1 main.go:227] handling current node
	I0318 12:44:43.883217    5712 command_runner.go:130] ! I0318 12:29:53.290935       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883299    5712 command_runner.go:130] ! I0318 12:29:53.290944       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883299    5712 command_runner.go:130] ! I0318 12:29:53.291443       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883299    5712 command_runner.go:130] ! I0318 12:29:53.291608       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883299    5712 command_runner.go:130] ! I0318 12:30:03.307097       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883299    5712 command_runner.go:130] ! I0318 12:30:03.307405       1 main.go:227] handling current node
	I0318 12:44:43.883387    5712 command_runner.go:130] ! I0318 12:30:03.307624       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883387    5712 command_runner.go:130] ! I0318 12:30:03.307713       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883387    5712 command_runner.go:130] ! I0318 12:30:03.307989       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883387    5712 command_runner.go:130] ! I0318 12:30:03.308095       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883387    5712 command_runner.go:130] ! I0318 12:30:13.315412       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883493    5712 command_runner.go:130] ! I0318 12:30:13.315512       1 main.go:227] handling current node
	I0318 12:44:43.883512    5712 command_runner.go:130] ! I0318 12:30:13.315528       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883512    5712 command_runner.go:130] ! I0318 12:30:13.315537       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883512    5712 command_runner.go:130] ! I0318 12:30:13.316187       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883512    5712 command_runner.go:130] ! I0318 12:30:13.316277       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883565    5712 command_runner.go:130] ! I0318 12:30:23.331223       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883565    5712 command_runner.go:130] ! I0318 12:30:23.331328       1 main.go:227] handling current node
	I0318 12:44:43.883565    5712 command_runner.go:130] ! I0318 12:30:23.331344       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883565    5712 command_runner.go:130] ! I0318 12:30:23.331352       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883643    5712 command_runner.go:130] ! I0318 12:30:23.331895       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883643    5712 command_runner.go:130] ! I0318 12:30:23.332071       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883643    5712 command_runner.go:130] ! I0318 12:30:33.338821       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883703    5712 command_runner.go:130] ! I0318 12:30:33.338848       1 main.go:227] handling current node
	I0318 12:44:43.883725    5712 command_runner.go:130] ! I0318 12:30:33.338860       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883725    5712 command_runner.go:130] ! I0318 12:30:33.338866       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:33.339004       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:33.339017       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:43.354041       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:43.354126       1 main.go:227] handling current node
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:43.354142       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:43.354153       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:43.354280       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:43.354293       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:53.362056       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:53.362198       1 main.go:227] handling current node
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:53.362230       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:53.362239       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:53.362887       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:30:53.363194       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:03.378995       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:03.379039       1 main.go:227] handling current node
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:03.379096       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:03.379108       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:03.379432       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:03.379450       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:13.392082       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:13.392188       1 main.go:227] handling current node
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:13.392224       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:13.392249       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:13.392820       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:13.392974       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:23.402269       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:23.402391       1 main.go:227] handling current node
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:23.402408       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:23.402417       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:23.403188       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:23.403223       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.883751    5712 command_runner.go:130] ! I0318 12:31:33.413396       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884368    5712 command_runner.go:130] ! I0318 12:31:33.413577       1 main.go:227] handling current node
	I0318 12:44:43.884415    5712 command_runner.go:130] ! I0318 12:31:33.413639       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884415    5712 command_runner.go:130] ! I0318 12:31:33.413654       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884639    5712 command_runner.go:130] ! I0318 12:31:33.414293       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884700    5712 command_runner.go:130] ! I0318 12:31:33.414437       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884700    5712 command_runner.go:130] ! I0318 12:31:43.424274       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884700    5712 command_runner.go:130] ! I0318 12:31:43.424320       1 main.go:227] handling current node
	I0318 12:44:43.884700    5712 command_runner.go:130] ! I0318 12:31:43.424332       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884700    5712 command_runner.go:130] ! I0318 12:31:43.424339       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884767    5712 command_runner.go:130] ! I0318 12:31:43.424591       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:43.424608       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:53.433473       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:53.433591       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:53.433607       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:53.433615       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:53.433851       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:31:53.433959       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:03.443363       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:03.443411       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:03.443424       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:03.443450       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:03.444602       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:03.445390       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:13.460166       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:13.460215       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:13.460229       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:13.460237       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:13.460679       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:13.460697       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:23.479958       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:23.480007       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:23.480024       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:23.480032       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:23.480521       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:23.480578       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:33.491143       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:33.491190       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:33.491204       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:33.491211       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:33.491340       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:33.491369       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:43.505355       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:43.505474       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:43.505490       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:43.505498       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:43.505666       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:43.505696       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:53.513310       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:53.513340       1 main.go:227] handling current node
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:53.513350       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:53.513357       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.884787    5712 command_runner.go:130] ! I0318 12:32:53.513783       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:32:53.513865       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:03.527897       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:03.528343       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:03.528485       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:03.528785       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:03.529110       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:03.529205       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:13.538048       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:13.538183       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:13.538222       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:13.538317       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:13.538750       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:13.538888       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:23.555771       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:23.555820       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:23.555895       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:23.555905       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:23.556511       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:23.556780       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:33.566023       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:33.566190       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:33.566208       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:33.566217       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:33.566931       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:33.567031       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:43.581332       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:43.581382       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:43.581449       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:43.581482       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:43.582063       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:43.582166       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:53.588426       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:53.588602       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:53.588619       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:53.588628       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:53.588919       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:33:53.588937       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:03.604902       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:03.605007       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:03.605023       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:03.605032       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:03.605612       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:03.605696       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:13.618369       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:13.618488       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:13.618585       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:13.618604       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:13.618738       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:13.618747       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:23.626772       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:23.626887       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:23.626903       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:23.626911       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:23.627415       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:23.627448       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:33.644122       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:33.644215       1 main.go:227] handling current node
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:33.644233       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:33.644757       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:33.645128       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:33.645240       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.885403    5712 command_runner.go:130] ! I0318 12:34:43.661684       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:43.661731       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:43.661744       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:43.661751       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:43.662532       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:43.662645       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:53.676649       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:53.677242       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:53.677518       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:53.677631       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:53.677873       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:34:53.677905       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:03.685328       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:03.685457       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:03.685474       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:03.685483       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:03.685861       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:03.686001       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:13.702673       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:13.702782       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:13.702801       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:13.703456       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:13.703827       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:13.703864       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:23.711167       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:23.711370       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:23.711388       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:23.711398       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:23.712127       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:23.712222       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:33.724041       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:33.724810       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:33.724973       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:33.725045       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:33.725458       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:33.725875       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:43.740216       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:43.740493       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:43.740511       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:43.740520       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:43.741453       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:43.741584       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:53.748632       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:53.749163       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:53.749285       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:53.749498       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:53.749815       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:35:53.749904       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:03.765208       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:03.765326       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:03.765343       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:03.765351       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:03.765883       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:03.766028       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:13.775221       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:13.775396       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:13.775430       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:13.775502       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:13.776058       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:13.776177       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:23.790073       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:23.790179       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:23.790195       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:23.790207       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:23.790761       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:23.790798       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:33.800116       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:33.800240       1 main.go:227] handling current node
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:33.800256       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:33.800265       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.886450    5712 command_runner.go:130] ! I0318 12:36:33.800837       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:33.800858       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:43.817961       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:43.818115       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:43.818132       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:43.818146       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:43.818537       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:43.818661       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:53.827340       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:53.827385       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:53.827398       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:53.827406       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:53.827787       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:36:53.827885       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:03.840761       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:03.840837       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:03.840851       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:03.840859       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:03.841285       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:03.841319       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:13.848127       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:13.848174       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:13.848188       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:13.848195       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:13.848630       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:13.848646       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:23.863745       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:23.863916       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:23.863950       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:23.863996       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:23.864419       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:23.864510       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:33.876214       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:33.876331       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:33.876347       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:33.876355       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:33.877021       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:33.877100       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:43.886399       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:43.886544       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:43.886626       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:43.886636       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:43.886872       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:43.886890       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:53.903761       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:53.903845       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:53.903871       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:53.903880       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:53.905033       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:37:53.905181       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:03.919532       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:03.919783       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:03.919840       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:03.919894       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:03.920221       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:03.920390       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:13.927894       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:13.928004       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:13.928022       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:13.928031       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:13.928232       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:13.928269       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:23.943692       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:23.943780       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:23.943795       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:23.943804       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:23.944523       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:23.944596       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:33.952000       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:33.952098       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:33.952114       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:33.952123       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:33.952466       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:33.952503       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:43.965979       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:43.966101       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:43.966117       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:43.966125       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.989210       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.989308       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.989322       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.989373       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.989864       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.989957       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:38:53.990028       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.157.200 Flags: [] Table: 0} 
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:39:03.996429       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:39:03.996598       1 main.go:227] handling current node
	I0318 12:44:43.887412    5712 command_runner.go:130] ! I0318 12:39:03.996614       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:03.996623       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:03.996739       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:03.996753       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:14.008318       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:14.008384       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:14.008398       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:14.008405       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:14.009080       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:14.009179       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:24.016154       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:24.016315       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:24.016330       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:24.016338       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:24.016842       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:24.016875       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:34.029061       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:34.029159       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:34.029175       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:34.029184       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:34.030103       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:34.030216       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:44.037921       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:44.037960       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:44.037972       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:44.037981       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:44.038243       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:44.038318       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:54.057786       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:54.058021       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:54.058100       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:54.058189       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:54.058376       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:39:54.058478       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:04.067119       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:04.067262       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:04.067280       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:04.067289       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:04.067742       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:04.067846       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:14.082426       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:14.082921       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:14.082946       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:14.082956       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:14.083174       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:14.083247       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:24.098060       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:24.098161       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:24.098178       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:24.098187       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:24.098316       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:24.098324       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:34.335103       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:34.335169       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:34.335185       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:34.335192       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:34.335470       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:34.335488       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:44.342962       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:44.343122       1 main.go:227] handling current node
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:44.343139       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:44.343148       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:44.343738       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:43.888419    5712 command_runner.go:130] ! I0318 12:40:44.343780       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:43.908429    5712 logs.go:123] Gathering logs for container status ...
	I0318 12:44:43.908429    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 12:44:44.016071    5712 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0318 12:44:44.016071    5712 command_runner.go:130] > 566e40ce923f7       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   e1b2432b0ed66       busybox-5b5d89c9d6-48qkw
	I0318 12:44:44.016071    5712 command_runner.go:130] > fcf17db92b351       ead0a4a53df89                                                                                         9 seconds ago        Running             coredns                   1                   1090dd5740980       coredns-5dd5756b68-fgn7v
	I0318 12:44:44.016071    5712 command_runner.go:130] > 4652c26c0904e       6e38f40d628db                                                                                         27 seconds ago       Running             storage-provisioner       2                   889c16eb0ab73       storage-provisioner
	I0318 12:44:44.016071    5712 command_runner.go:130] > 9fec05a61d2a9       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   5ecbdcbdad3fa       kindnet-kpt4f
	I0318 12:44:44.016071    5712 command_runner.go:130] > 787ade2ea2cd0       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   889c16eb0ab73       storage-provisioner
	I0318 12:44:44.016071    5712 command_runner.go:130] > 575b41a3a85a4       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   7a2f0ccaf5c4c       kube-proxy-4dg79
	I0318 12:44:44.016071    5712 command_runner.go:130] > a48a6d310b868       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   a7281d6e698ea       kube-apiserver-multinode-642600
	I0318 12:44:44.016071    5712 command_runner.go:130] > 14ae9398d33b1       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   eca6768355c74       kube-controller-manager-multinode-642600
	I0318 12:44:44.016071    5712 command_runner.go:130] > bd1e4f4d262e3       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   f62197122538f       kube-scheduler-multinode-642600
	I0318 12:44:44.016071    5712 command_runner.go:130] > 8e7911b58c587       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   67004ee038ee4       etcd-multinode-642600
	I0318 12:44:44.016071    5712 command_runner.go:130] > a8dd2eacb7251       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago       Exited              busybox                   0                   29bb4d534c2e2       busybox-5b5d89c9d6-48qkw
	I0318 12:44:44.016071    5712 command_runner.go:130] > e81f1d2fdb360       ead0a4a53df89                                                                                         25 minutes ago       Exited              coredns                   0                   ed38da653fbef       coredns-5dd5756b68-fgn7v
	I0318 12:44:44.016071    5712 command_runner.go:130] > 5cf42651cb21d       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago       Exited              kindnet-cni               0                   fef37141be6db       kindnet-kpt4f
	I0318 12:44:44.016071    5712 command_runner.go:130] > 4bbad08fe59ac       83f6cc407eed8                                                                                         25 minutes ago       Exited              kube-proxy                0                   2f4709a3a45a4       kube-proxy-4dg79
	I0318 12:44:44.016071    5712 command_runner.go:130] > a54be44369019       d058aa5ab969c                                                                                         26 minutes ago       Exited              kube-controller-manager   0                   d766c4514f0bf       kube-controller-manager-multinode-642600
	I0318 12:44:44.016607    5712 command_runner.go:130] > 47777d4c0b90d       e3db313c6dbc0                                                                                         26 minutes ago       Exited              kube-scheduler            0                   3500a9f1ca84e       kube-scheduler-multinode-642600
	I0318 12:44:44.018691    5712 logs.go:123] Gathering logs for coredns [e81f1d2fdb36] ...
	I0318 12:44:44.018691    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81f1d2fdb36"
	I0318 12:44:44.054690    5712 command_runner.go:130] > .:53
	I0318 12:44:44.055567    5712 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	I0318 12:44:44.055567    5712 command_runner.go:130] > CoreDNS-1.10.1
	I0318 12:44:44.055567    5712 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 12:44:44.055697    5712 command_runner.go:130] > [INFO] 127.0.0.1:48183 - 41539 "HINFO IN 767578685007701398.8900982300391989616. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.040167772s
	I0318 12:44:44.055761    5712 command_runner.go:130] > [INFO] 10.244.0.3:56190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320901s
	I0318 12:44:44.055820    5712 command_runner.go:130] > [INFO] 10.244.0.3:43050 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.04023503s
	I0318 12:44:44.055820    5712 command_runner.go:130] > [INFO] 10.244.0.3:47302 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.158419612s
	I0318 12:44:44.055820    5712 command_runner.go:130] > [INFO] 10.244.0.3:37199 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.162590352s
	I0318 12:44:44.055820    5712 command_runner.go:130] > [INFO] 10.244.1.2:48003 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216101s
	I0318 12:44:44.055868    5712 command_runner.go:130] > [INFO] 10.244.1.2:48857 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000380201s
	I0318 12:44:44.055868    5712 command_runner.go:130] > [INFO] 10.244.1.2:52412 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000070401s
	I0318 12:44:44.055909    5712 command_runner.go:130] > [INFO] 10.244.1.2:59362 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000071801s
	I0318 12:44:44.055909    5712 command_runner.go:130] > [INFO] 10.244.0.3:38833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000250501s
	I0318 12:44:44.055961    5712 command_runner.go:130] > [INFO] 10.244.0.3:34860 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.064163607s
	I0318 12:44:44.056142    5712 command_runner.go:130] > [INFO] 10.244.0.3:45210 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227601s
	I0318 12:44:44.056189    5712 command_runner.go:130] > [INFO] 10.244.0.3:32804 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001229s
	I0318 12:44:44.056189    5712 command_runner.go:130] > [INFO] 10.244.0.3:44904 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01563145s
	I0318 12:44:44.056189    5712 command_runner.go:130] > [INFO] 10.244.0.3:34958 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002035s
	I0318 12:44:44.056189    5712 command_runner.go:130] > [INFO] 10.244.0.3:59094 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001507s
	I0318 12:44:44.056256    5712 command_runner.go:130] > [INFO] 10.244.0.3:39370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181001s
	I0318 12:44:44.056256    5712 command_runner.go:130] > [INFO] 10.244.1.2:40318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000302101s
	I0318 12:44:44.056300    5712 command_runner.go:130] > [INFO] 10.244.1.2:43523 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001489s
	I0318 12:44:44.056300    5712 command_runner.go:130] > [INFO] 10.244.1.2:47882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001346s
	I0318 12:44:44.056300    5712 command_runner.go:130] > [INFO] 10.244.1.2:38222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057401s
	I0318 12:44:44.056300    5712 command_runner.go:130] > [INFO] 10.244.1.2:49068 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001253s
	I0318 12:44:44.056300    5712 command_runner.go:130] > [INFO] 10.244.1.2:35375 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000582s
	I0318 12:44:44.056390    5712 command_runner.go:130] > [INFO] 10.244.1.2:40933 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179201s
	I0318 12:44:44.056390    5712 command_runner.go:130] > [INFO] 10.244.1.2:36014 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002051s
	I0318 12:44:44.056390    5712 command_runner.go:130] > [INFO] 10.244.0.3:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265401s
	I0318 12:44:44.056390    5712 command_runner.go:130] > [INFO] 10.244.0.3:52912 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148001s
	I0318 12:44:44.056390    5712 command_runner.go:130] > [INFO] 10.244.0.3:33147 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143701s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.0.3:49893 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000536s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:42681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001221s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:41416 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:58254 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000241501s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:35844 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197201s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.0.3:33559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102201s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.0.3:53963 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158701s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.0.3:41406 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001297s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.0.3:34685 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000264001s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:43312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001178s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:55281 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235501s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:34710 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000874s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] 10.244.1.2:57686 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000557s
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0318 12:44:44.056452    5712 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0318 12:44:44.059249    5712 logs.go:123] Gathering logs for kube-scheduler [bd1e4f4d262e] ...
	I0318 12:44:44.059249    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1e4f4d262e"
	I0318 12:44:44.096040    5712 command_runner.go:130] ! I0318 12:43:27.649061       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:44.097030    5712 command_runner.go:130] ! W0318 12:43:30.548831       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 12:44:44.097030    5712 command_runner.go:130] ! W0318 12:43:30.549092       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:44.097030    5712 command_runner.go:130] ! W0318 12:43:30.549282       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 12:44:44.097030    5712 command_runner.go:130] ! W0318 12:43:30.549461       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.613305       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.613417       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.618512       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.619171       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.619276       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.620071       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:44.097030    5712 command_runner.go:130] ! I0318 12:43:30.720411       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:44.099046    5712 logs.go:123] Gathering logs for kube-proxy [575b41a3a85a] ...
	I0318 12:44:44.099046    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 575b41a3a85a"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.336778       1 server_others.go:69] "Using iptables proxy"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.550433       1 node.go:141] Successfully retrieved node IP: 172.25.148.129
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.793084       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.793109       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.796954       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.798936       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.800347       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.800569       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.803648       1 config.go:188] "Starting service config controller"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.805156       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.805421       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.805584       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.808628       1 config.go:315] "Starting node config controller"
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.808736       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.905580       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.907041       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:44:44.129043    5712 command_runner.go:130] ! I0318 12:43:33.909416       1 shared_informer.go:318] Caches are synced for node config
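	The kube-proxy log above shows it detecting no IPv6 iptables support and running single-stack IPv4 with the iptables proxier. On a kubeadm-provisioned cluster such as this one, the configured proxy mode can be read back from the kube-proxy ConfigMap (a sketch, assuming the kubeadm config.conf layout; an empty mode field means the iptables default):

	    # Print the proxy mode from kube-proxy's config.conf key
	    kubectl --context multinode-642600 -n kube-system get configmap kube-proxy \
	      -o jsonpath='{.data.config\.conf}' | grep -i '^mode'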
	I0318 12:44:44.131031    5712 logs.go:123] Gathering logs for kindnet [9fec05a61d2a] ...
	I0318 12:44:44.131031    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fec05a61d2a"
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:43:33.429181       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:43:33.431032       1 main.go:107] hostIP = 172.25.148.129
	I0318 12:44:44.166540    5712 command_runner.go:130] ! podIP = 172.25.148.129
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:43:33.432708       1 main.go:116] setting mtu 1500 for CNI 
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:43:33.432750       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:43:33.432773       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.855331       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.906638       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.906763       1 main.go:227] handling current node
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.907280       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.907371       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.907763       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.25.159.102 Flags: [] Table: 0} 
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.907983       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.907999       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:03.908063       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.157.200 Flags: [] Table: 0} 
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:13.926166       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:13.926260       1 main.go:227] handling current node
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:13.926281       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:13.926377       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:44.166540    5712 command_runner.go:130] ! I0318 12:44:13.927231       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:44.167228    5712 command_runner.go:130] ! I0318 12:44:13.927364       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:44.167228    5712 command_runner.go:130] ! I0318 12:44:23.943396       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:44.167228    5712 command_runner.go:130] ! I0318 12:44:23.943437       1 main.go:227] handling current node
	I0318 12:44:44.167228    5712 command_runner.go:130] ! I0318 12:44:23.943450       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:44.167346    5712 command_runner.go:130] ! I0318 12:44:23.943456       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:44.167346    5712 command_runner.go:130] ! I0318 12:44:23.943816       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:44.167346    5712 command_runner.go:130] ! I0318 12:44:23.943956       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:44.167346    5712 command_runner.go:130] ! I0318 12:44:33.951114       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:44.167414    5712 command_runner.go:130] ! I0318 12:44:33.951215       1 main.go:227] handling current node
	I0318 12:44:44.167442    5712 command_runner.go:130] ! I0318 12:44:33.951232       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:33.951241       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:33.951807       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:33.951927       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:43.968530       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:43.968658       1 main.go:227] handling current node
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:43.968737       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:43.968990       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:43.969485       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:44.167499    5712 command_runner.go:130] ! I0318 12:44:43.969715       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
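	Each "Adding route" line above corresponds to kindnet installing a static route for a remote node's pod CIDR via that node's IP as the gateway. A quick sketch for verifying the result from inside the node (profile name again assumed from the node names in the log):

	    # Show the pod-CIDR routes kindnet programmed for the peer nodes;
	    # expect lines like: 10.244.1.0/24 via 172.25.159.102 ...
	    minikube ssh -p multinode-642600 -- ip route show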
	I0318 12:44:46.680472    5712 api_server.go:253] Checking apiserver healthz at https://172.25.148.129:8443/healthz ...
	I0318 12:44:46.689900    5712 api_server.go:279] https://172.25.148.129:8443/healthz returned 200:
	ok
	I0318 12:44:46.690852    5712 round_trippers.go:463] GET https://172.25.148.129:8443/version
	I0318 12:44:46.690868    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:46.690907    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:46.690907    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:46.692243    5712 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 12:44:46.692659    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:46.692659    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:46.692659    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:46.692659    5712 round_trippers.go:580]     Content-Length: 264
	I0318 12:44:46.692659    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:46 GMT
	I0318 12:44:46.692659    5712 round_trippers.go:580]     Audit-Id: d3ce1c29-ea17-462e-848b-e39441cce8c7
	I0318 12:44:46.692659    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:46.692659    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:46.692659    5712 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 12:44:46.692813    5712 api_server.go:141] control plane version: v1.28.4
	I0318 12:44:46.692813    5712 api_server.go:131] duration metric: took 3.8780055s to wait for apiserver health ...
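	The health wait above first polls /healthz and then fetches /version without credentials; both endpoints are anonymously readable on a default cluster (via the system:public-info-viewer role), so the same check can be sketched with curl, where -k skips verification of the cluster-internal serving cert:

	    curl -k https://172.25.148.129:8443/healthz    # expect: ok
	    curl -k https://172.25.148.129:8443/version    # expect the JSON shown above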
	I0318 12:44:46.692813    5712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 12:44:46.704826    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 12:44:46.740735    5712 command_runner.go:130] > a48a6d310b86
	I0318 12:44:46.740800    5712 logs.go:276] 1 containers: [a48a6d310b86]
	I0318 12:44:46.751792    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 12:44:46.779857    5712 command_runner.go:130] > 8e7911b58c58
	I0318 12:44:46.780083    5712 logs.go:276] 1 containers: [8e7911b58c58]
	I0318 12:44:46.791203    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 12:44:46.818283    5712 command_runner.go:130] > fcf17db92b35
	I0318 12:44:46.819045    5712 command_runner.go:130] > e81f1d2fdb36
	I0318 12:44:46.820240    5712 logs.go:276] 2 containers: [fcf17db92b35 e81f1d2fdb36]
	I0318 12:44:46.830151    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 12:44:46.873443    5712 command_runner.go:130] > bd1e4f4d262e
	I0318 12:44:46.874114    5712 command_runner.go:130] > 47777d4c0b90
	I0318 12:44:46.874396    5712 logs.go:276] 2 containers: [bd1e4f4d262e 47777d4c0b90]
	I0318 12:44:46.885002    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 12:44:46.915972    5712 command_runner.go:130] > 575b41a3a85a
	I0318 12:44:46.916070    5712 command_runner.go:130] > 4bbad08fe59a
	I0318 12:44:46.916070    5712 logs.go:276] 2 containers: [575b41a3a85a 4bbad08fe59a]
	I0318 12:44:46.926910    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 12:44:46.959730    5712 command_runner.go:130] > 14ae9398d33b
	I0318 12:44:46.959730    5712 command_runner.go:130] > a54be4436901
	I0318 12:44:46.959839    5712 logs.go:276] 2 containers: [14ae9398d33b a54be4436901]
	I0318 12:44:46.970281    5712 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 12:44:46.999590    5712 command_runner.go:130] > 9fec05a61d2a
	I0318 12:44:47.000570    5712 command_runner.go:130] > 5cf42651cb21
	I0318 12:44:47.000756    5712 logs.go:276] 2 containers: [9fec05a61d2a 5cf42651cb21]
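	The docker ps calls above enumerate container IDs per component by filtering on the kubelet's k8s_<component> container-name prefix; components listed with two IDs typically have one exited container from before a restart plus the currently running one. A sketch of the same enumeration, to be run inside the node (for example via minikube ssh):

	    # List current and exited container IDs per component, as logs.go does
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      echo "== $c =="
	      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
	    done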
	I0318 12:44:47.000809    5712 logs.go:123] Gathering logs for kindnet [5cf42651cb21] ...
	I0318 12:44:47.000870    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf42651cb21"
	I0318 12:44:47.033864    5712 command_runner.go:130] ! I0318 12:29:43.278241       1 main.go:227] handling current node
	I0318 12:44:47.033864    5712 command_runner.go:130] ! I0318 12:29:43.278258       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.033864    5712 command_runner.go:130] ! I0318 12:29:43.278267       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.033864    5712 command_runner.go:130] ! I0318 12:29:43.279034       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.033864    5712 command_runner.go:130] ! I0318 12:29:43.279112       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.034331    5712 command_runner.go:130] ! I0318 12:29:53.290788       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.034500    5712 command_runner.go:130] ! I0318 12:29:53.290919       1 main.go:227] handling current node
	I0318 12:44:47.034578    5712 command_runner.go:130] ! I0318 12:29:53.290935       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.034719    5712 command_runner.go:130] ! I0318 12:29:53.290944       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.035257    5712 command_runner.go:130] ! I0318 12:29:53.291443       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:29:53.291608       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:03.307097       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:03.307405       1 main.go:227] handling current node
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:03.307624       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:03.307713       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:03.307989       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:03.308095       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:13.315412       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:13.315512       1 main.go:227] handling current node
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:13.315528       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:13.315537       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:13.316187       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:13.316277       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:23.331223       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:23.331328       1 main.go:227] handling current node
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:23.331344       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:23.331352       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:23.331895       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:23.332071       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:33.338821       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:33.338848       1 main.go:227] handling current node
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:33.338860       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:33.338866       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:33.339004       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:33.339017       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:43.354041       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.042981    5712 command_runner.go:130] ! I0318 12:30:43.354126       1 main.go:227] handling current node
	I0318 12:44:47.043545    5712 command_runner.go:130] ! I0318 12:30:43.354142       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043545    5712 command_runner.go:130] ! I0318 12:30:43.354153       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043545    5712 command_runner.go:130] ! I0318 12:30:43.354280       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043545    5712 command_runner.go:130] ! I0318 12:30:43.354293       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043545    5712 command_runner.go:130] ! I0318 12:30:53.362056       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043545    5712 command_runner.go:130] ! I0318 12:30:53.362198       1 main.go:227] handling current node
	I0318 12:44:47.043616    5712 command_runner.go:130] ! I0318 12:30:53.362230       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043616    5712 command_runner.go:130] ! I0318 12:30:53.362239       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043616    5712 command_runner.go:130] ! I0318 12:30:53.362887       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043616    5712 command_runner.go:130] ! I0318 12:30:53.363194       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043616    5712 command_runner.go:130] ! I0318 12:31:03.378995       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043692    5712 command_runner.go:130] ! I0318 12:31:03.379039       1 main.go:227] handling current node
	I0318 12:44:47.043692    5712 command_runner.go:130] ! I0318 12:31:03.379096       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043752    5712 command_runner.go:130] ! I0318 12:31:03.379108       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043826    5712 command_runner.go:130] ! I0318 12:31:03.379432       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:03.379450       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:13.392082       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:13.392188       1 main.go:227] handling current node
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:13.392224       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:13.392249       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:13.392820       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:13.392974       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:23.402269       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:23.402391       1 main.go:227] handling current node
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:23.402408       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:23.402417       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:23.403188       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:23.403223       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:33.413396       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:33.413577       1 main.go:227] handling current node
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:33.413639       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:33.413654       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:33.414293       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:33.414437       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:43.424274       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:43.424320       1 main.go:227] handling current node
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:43.424332       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:43.424339       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:43.424591       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:43.424608       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:53.433473       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:53.433591       1 main.go:227] handling current node
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:53.433607       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.043886    5712 command_runner.go:130] ! I0318 12:31:53.433615       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.044489    5712 command_runner.go:130] ! I0318 12:31:53.433851       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.044489    5712 command_runner.go:130] ! I0318 12:31:53.433959       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.044489    5712 command_runner.go:130] ! I0318 12:32:03.443363       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.044489    5712 command_runner.go:130] ! I0318 12:32:03.443411       1 main.go:227] handling current node
	I0318 12:44:47.044489    5712 command_runner.go:130] ! I0318 12:32:03.443424       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:03.443450       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:03.444602       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:03.445390       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:13.460166       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:13.460215       1 main.go:227] handling current node
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:13.460229       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.044586    5712 command_runner.go:130] ! I0318 12:32:13.460237       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.044687    5712 command_runner.go:130] ! I0318 12:32:13.460679       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.044743    5712 command_runner.go:130] ! I0318 12:32:13.460697       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.044743    5712 command_runner.go:130] ! I0318 12:32:23.479958       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.044743    5712 command_runner.go:130] ! I0318 12:32:23.480007       1 main.go:227] handling current node
	I0318 12:44:47.044743    5712 command_runner.go:130] ! I0318 12:32:23.480024       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.044743    5712 command_runner.go:130] ! I0318 12:32:23.480032       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.044811    5712 command_runner.go:130] ! I0318 12:32:23.480521       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.044811    5712 command_runner.go:130] ! I0318 12:32:23.480578       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.044811    5712 command_runner.go:130] ! I0318 12:32:33.491143       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.044866    5712 command_runner.go:130] ! I0318 12:32:33.491190       1 main.go:227] handling current node
	I0318 12:44:47.044866    5712 command_runner.go:130] ! I0318 12:32:33.491204       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.044866    5712 command_runner.go:130] ! I0318 12:32:33.491211       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.044866    5712 command_runner.go:130] ! I0318 12:32:33.491340       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.044928    5712 command_runner.go:130] ! I0318 12:32:33.491369       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.044928    5712 command_runner.go:130] ! I0318 12:32:43.505355       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.044928    5712 command_runner.go:130] ! I0318 12:32:43.505474       1 main.go:227] handling current node
	I0318 12:44:47.044982    5712 command_runner.go:130] ! I0318 12:32:43.505490       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.044982    5712 command_runner.go:130] ! I0318 12:32:43.505498       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.044982    5712 command_runner.go:130] ! I0318 12:32:43.505666       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.044982    5712 command_runner.go:130] ! I0318 12:32:43.505696       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045038    5712 command_runner.go:130] ! I0318 12:32:53.513310       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045038    5712 command_runner.go:130] ! I0318 12:32:53.513340       1 main.go:227] handling current node
	I0318 12:44:47.045038    5712 command_runner.go:130] ! I0318 12:32:53.513350       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045038    5712 command_runner.go:130] ! I0318 12:32:53.513357       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045038    5712 command_runner.go:130] ! I0318 12:32:53.513783       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045118    5712 command_runner.go:130] ! I0318 12:32:53.513865       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045118    5712 command_runner.go:130] ! I0318 12:33:03.527897       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045118    5712 command_runner.go:130] ! I0318 12:33:03.528343       1 main.go:227] handling current node
	I0318 12:44:47.045176    5712 command_runner.go:130] ! I0318 12:33:03.528485       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045176    5712 command_runner.go:130] ! I0318 12:33:03.528785       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045176    5712 command_runner.go:130] ! I0318 12:33:03.529110       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045232    5712 command_runner.go:130] ! I0318 12:33:03.529205       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045232    5712 command_runner.go:130] ! I0318 12:33:13.538048       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045232    5712 command_runner.go:130] ! I0318 12:33:13.538183       1 main.go:227] handling current node
	I0318 12:44:47.045232    5712 command_runner.go:130] ! I0318 12:33:13.538222       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045232    5712 command_runner.go:130] ! I0318 12:33:13.538317       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045232    5712 command_runner.go:130] ! I0318 12:33:13.538750       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045289    5712 command_runner.go:130] ! I0318 12:33:13.538888       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045289    5712 command_runner.go:130] ! I0318 12:33:23.555771       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045289    5712 command_runner.go:130] ! I0318 12:33:23.555820       1 main.go:227] handling current node
	I0318 12:44:47.045289    5712 command_runner.go:130] ! I0318 12:33:23.555895       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045289    5712 command_runner.go:130] ! I0318 12:33:23.555905       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045289    5712 command_runner.go:130] ! I0318 12:33:23.556511       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045515    5712 command_runner.go:130] ! I0318 12:33:23.556780       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045515    5712 command_runner.go:130] ! I0318 12:33:33.566023       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045574    5712 command_runner.go:130] ! I0318 12:33:33.566190       1 main.go:227] handling current node
	I0318 12:44:47.045574    5712 command_runner.go:130] ! I0318 12:33:33.566208       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045574    5712 command_runner.go:130] ! I0318 12:33:33.566217       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045574    5712 command_runner.go:130] ! I0318 12:33:33.566931       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045631    5712 command_runner.go:130] ! I0318 12:33:33.567031       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045631    5712 command_runner.go:130] ! I0318 12:33:43.581332       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045631    5712 command_runner.go:130] ! I0318 12:33:43.581382       1 main.go:227] handling current node
	I0318 12:44:47.045689    5712 command_runner.go:130] ! I0318 12:33:43.581449       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045689    5712 command_runner.go:130] ! I0318 12:33:43.581482       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045689    5712 command_runner.go:130] ! I0318 12:33:43.582063       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:43.582166       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:53.588426       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:53.588602       1 main.go:227] handling current node
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:53.588619       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:53.588628       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:53.588919       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:33:53.588937       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:03.604902       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:03.605007       1 main.go:227] handling current node
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:03.605023       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:03.605032       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:03.605612       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:03.605696       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:13.618369       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:13.618488       1 main.go:227] handling current node
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:13.618585       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:13.618604       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:13.618738       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:13.618747       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:23.626772       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:23.626887       1 main.go:227] handling current node
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:23.626903       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:23.626911       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:23.627415       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:23.627448       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:33.644122       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:33.644215       1 main.go:227] handling current node
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:33.644233       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:33.644757       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:33.645128       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:33.645240       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:43.661684       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.045746    5712 command_runner.go:130] ! I0318 12:34:43.661731       1 main.go:227] handling current node
	I0318 12:44:47.046297    5712 command_runner.go:130] ! I0318 12:34:43.661744       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.046648    5712 command_runner.go:130] ! I0318 12:34:43.661751       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.046648    5712 command_runner.go:130] ! I0318 12:34:43.662532       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.046648    5712 command_runner.go:130] ! I0318 12:34:43.662645       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.046957    5712 command_runner.go:130] ! I0318 12:34:53.676649       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:34:53.677242       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:34:53.677518       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:34:53.677631       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:34:53.677873       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:34:53.677905       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:03.685328       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:03.685457       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:03.685474       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:03.685483       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:03.685861       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:03.686001       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:13.702673       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:13.702782       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:13.702801       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:13.703456       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:13.703827       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:13.703864       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:23.711167       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:23.711370       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:23.711388       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:23.711398       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:23.712127       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:23.712222       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:33.724041       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:33.724810       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:33.724973       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:33.725045       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:33.725458       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:33.725875       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:43.740216       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:43.740493       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:43.740511       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:43.740520       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:43.741453       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:43.741584       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:53.748632       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:53.749163       1 main.go:227] handling current node
	I0318 12:44:47.047022    5712 command_runner.go:130] ! I0318 12:35:53.749285       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047566    5712 command_runner.go:130] ! I0318 12:35:53.749498       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047566    5712 command_runner.go:130] ! I0318 12:35:53.749815       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:35:53.749904       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:36:03.765208       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:36:03.765326       1 main.go:227] handling current node
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:36:03.765343       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:36:03.765351       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:36:03.765883       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047629    5712 command_runner.go:130] ! I0318 12:36:03.766028       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047727    5712 command_runner.go:130] ! I0318 12:36:13.775221       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047727    5712 command_runner.go:130] ! I0318 12:36:13.775396       1 main.go:227] handling current node
	I0318 12:44:47.047781    5712 command_runner.go:130] ! I0318 12:36:13.775430       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047781    5712 command_runner.go:130] ! I0318 12:36:13.775502       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047781    5712 command_runner.go:130] ! I0318 12:36:13.776058       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047866    5712 command_runner.go:130] ! I0318 12:36:13.776177       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047866    5712 command_runner.go:130] ! I0318 12:36:23.790073       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.047941    5712 command_runner.go:130] ! I0318 12:36:23.790179       1 main.go:227] handling current node
	I0318 12:44:47.047941    5712 command_runner.go:130] ! I0318 12:36:23.790195       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.047941    5712 command_runner.go:130] ! I0318 12:36:23.790207       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.047992    5712 command_runner.go:130] ! I0318 12:36:23.790761       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.047992    5712 command_runner.go:130] ! I0318 12:36:23.790798       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.047992    5712 command_runner.go:130] ! I0318 12:36:33.800116       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048043    5712 command_runner.go:130] ! I0318 12:36:33.800240       1 main.go:227] handling current node
	I0318 12:44:47.048043    5712 command_runner.go:130] ! I0318 12:36:33.800256       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048043    5712 command_runner.go:130] ! I0318 12:36:33.800265       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048091    5712 command_runner.go:130] ! I0318 12:36:33.800837       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048091    5712 command_runner.go:130] ! I0318 12:36:33.800858       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048091    5712 command_runner.go:130] ! I0318 12:36:43.817961       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048143    5712 command_runner.go:130] ! I0318 12:36:43.818115       1 main.go:227] handling current node
	I0318 12:44:47.048143    5712 command_runner.go:130] ! I0318 12:36:43.818132       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048143    5712 command_runner.go:130] ! I0318 12:36:43.818146       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048143    5712 command_runner.go:130] ! I0318 12:36:43.818537       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048220    5712 command_runner.go:130] ! I0318 12:36:43.818661       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048220    5712 command_runner.go:130] ! I0318 12:36:53.827340       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048220    5712 command_runner.go:130] ! I0318 12:36:53.827385       1 main.go:227] handling current node
	I0318 12:44:47.048220    5712 command_runner.go:130] ! I0318 12:36:53.827398       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048220    5712 command_runner.go:130] ! I0318 12:36:53.827406       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:36:53.827787       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:36:53.827885       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:37:03.840761       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:37:03.840837       1 main.go:227] handling current node
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:37:03.840851       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:37:03.840859       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048290    5712 command_runner.go:130] ! I0318 12:37:03.841285       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048424    5712 command_runner.go:130] ! I0318 12:37:03.841319       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048424    5712 command_runner.go:130] ! I0318 12:37:13.848127       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048424    5712 command_runner.go:130] ! I0318 12:37:13.848174       1 main.go:227] handling current node
	I0318 12:44:47.048424    5712 command_runner.go:130] ! I0318 12:37:13.848188       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048510    5712 command_runner.go:130] ! I0318 12:37:13.848195       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048510    5712 command_runner.go:130] ! I0318 12:37:13.848630       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048510    5712 command_runner.go:130] ! I0318 12:37:13.848646       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048569    5712 command_runner.go:130] ! I0318 12:37:23.863745       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048569    5712 command_runner.go:130] ! I0318 12:37:23.863916       1 main.go:227] handling current node
	I0318 12:44:47.048569    5712 command_runner.go:130] ! I0318 12:37:23.863950       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048569    5712 command_runner.go:130] ! I0318 12:37:23.863996       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048627    5712 command_runner.go:130] ! I0318 12:37:23.864419       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048627    5712 command_runner.go:130] ! I0318 12:37:23.864510       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048627    5712 command_runner.go:130] ! I0318 12:37:33.876214       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048685    5712 command_runner.go:130] ! I0318 12:37:33.876331       1 main.go:227] handling current node
	I0318 12:44:47.048685    5712 command_runner.go:130] ! I0318 12:37:33.876347       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048685    5712 command_runner.go:130] ! I0318 12:37:33.876355       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048740    5712 command_runner.go:130] ! I0318 12:37:33.877021       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048740    5712 command_runner.go:130] ! I0318 12:37:33.877100       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048740    5712 command_runner.go:130] ! I0318 12:37:43.886399       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048795    5712 command_runner.go:130] ! I0318 12:37:43.886544       1 main.go:227] handling current node
	I0318 12:44:47.048795    5712 command_runner.go:130] ! I0318 12:37:43.886626       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048795    5712 command_runner.go:130] ! I0318 12:37:43.886636       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048846    5712 command_runner.go:130] ! I0318 12:37:43.886872       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048846    5712 command_runner.go:130] ! I0318 12:37:43.886890       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048846    5712 command_runner.go:130] ! I0318 12:37:53.903761       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048895    5712 command_runner.go:130] ! I0318 12:37:53.903845       1 main.go:227] handling current node
	I0318 12:44:47.048895    5712 command_runner.go:130] ! I0318 12:37:53.903871       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:37:53.903880       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:37:53.905033       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:37:53.905181       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:03.919532       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:03.919783       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:03.919840       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:03.919894       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:03.920221       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:03.920390       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:13.927894       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:13.928004       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:13.928022       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:13.928031       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:13.928232       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:13.928269       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:23.943692       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:23.943780       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:23.943795       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:23.943804       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:23.944523       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:23.944596       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:33.952000       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:33.952098       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:33.952114       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:33.952123       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:33.952466       1 main.go:223] Handling node with IPs: map[172.25.159.254:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:33.952503       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.2.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:43.965979       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:43.966101       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:43.966117       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:43.966125       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.989210       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.989308       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.989322       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.989373       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.989864       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.989957       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:38:53.990028       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.157.200 Flags: [] Table: 0} 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:39:03.996429       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:39:03.996598       1 main.go:227] handling current node
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:39:03.996614       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:39:03.996623       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.048942    5712 command_runner.go:130] ! I0318 12:39:03.996739       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:03.996753       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:14.008318       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:14.008384       1 main.go:227] handling current node
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:14.008398       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:14.008405       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:14.009080       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:14.009179       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:24.016154       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:24.016315       1 main.go:227] handling current node
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:24.016330       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:24.016338       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:24.016842       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:24.016875       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049481    5712 command_runner.go:130] ! I0318 12:39:34.029061       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049678    5712 command_runner.go:130] ! I0318 12:39:34.029159       1 main.go:227] handling current node
	I0318 12:44:47.049678    5712 command_runner.go:130] ! I0318 12:39:34.029175       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049678    5712 command_runner.go:130] ! I0318 12:39:34.029184       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049678    5712 command_runner.go:130] ! I0318 12:39:34.030103       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:34.030216       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:44.037921       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:44.037960       1 main.go:227] handling current node
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:44.037972       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:44.037981       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:44.038243       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:44.038318       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:54.057786       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:54.058021       1 main.go:227] handling current node
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:54.058100       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:54.058189       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:54.058376       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:39:54.058478       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:04.067119       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:04.067262       1 main.go:227] handling current node
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:04.067280       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:04.067289       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:04.067742       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:04.067846       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:14.082426       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:14.082921       1 main.go:227] handling current node
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:14.082946       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:14.082956       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:14.083174       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:14.083247       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:24.098060       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:24.098161       1 main.go:227] handling current node
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:24.098178       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:24.098187       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:24.098316       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:24.098324       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:34.335103       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.049738    5712 command_runner.go:130] ! I0318 12:40:34.335169       1 main.go:227] handling current node
	I0318 12:44:47.050291    5712 command_runner.go:130] ! I0318 12:40:34.335185       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.050291    5712 command_runner.go:130] ! I0318 12:40:34.335192       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:34.335470       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:34.335488       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:44.342962       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:44.343122       1 main.go:227] handling current node
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:44.343139       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:44.343148       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:44.343738       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.050345    5712 command_runner.go:130] ! I0318 12:40:44.343780       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
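	[editor's note] The kindnet entries above show the daemon's periodic (roughly 10-second) reconciliation loop: on each pass it enumerates every node, handles the current node locally, and ensures a route to each remote node's pod CIDR via that node's IP. The routes.go:62 line at 12:38:53 records a new route being programmed after multinode-642600-m03 came back with a new IP (172.25.157.200) and a new CIDR (10.244.3.0/24, previously 10.244.2.0/24). Below is a minimal Go sketch of that route programming, assuming a vishvananda/netlink-style API like the one kindnet uses; the CIDR and gateway values are taken from the log, and the helper name is illustrative, not kindnet's actual code.

	package main

	import (
		"fmt"
		"net"

		"github.com/vishvananda/netlink"
	)

	// ensurePodCIDRRoute programs a route so traffic for a remote node's pod
	// CIDR (e.g. 10.244.3.0/24) is sent via that node's IP (e.g. 172.25.157.200),
	// mirroring the effect of kindnet's routes.go "Adding route" log line.
	// Requires Linux and CAP_NET_ADMIN to actually modify the routing table.
	func ensurePodCIDRRoute(podCIDR, nodeIP string) error {
		_, dst, err := net.ParseCIDR(podCIDR)
		if err != nil {
			return fmt.Errorf("parse pod CIDR %q: %w", podCIDR, err)
		}
		gw := net.ParseIP(nodeIP)
		if gw == nil {
			return fmt.Errorf("parse node IP %q", nodeIP)
		}
		// RouteReplace is idempotent: it adds the route or updates it in place,
		// which suits a reconciliation loop that re-runs every few seconds.
		return netlink.RouteReplace(&netlink.Route{Dst: dst, Gw: gw})
	}

	func main() {
		// Values from the log: m03's new pod CIDR and node IP after its restart.
		if err := ensurePodCIDRRoute("10.244.3.0/24", "172.25.157.200"); err != nil {
			fmt.Println("route update failed:", err)
		}
	}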
	I0318 12:44:47.069992    5712 logs.go:123] Gathering logs for Docker ...
	I0318 12:44:47.069992    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 12:44:47.105334    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:47.105334    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:47.105482    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:47.105482    5712 command_runner.go:130] > Mar 18 12:41:52 minikube cri-dockerd[219]: time="2024-03-18T12:41:52Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:47.105482    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:47.105482    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:47.105482    5712 command_runner.go:130] > Mar 18 12:41:52 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.105624    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0318 12:44:47.105624    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.105624    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:47.105624    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:47.105624    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:47.105741    5712 command_runner.go:130] > Mar 18 12:41:55 minikube cri-dockerd[404]: time="2024-03-18T12:41:55Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:47.105741    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:47.105741    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:47.105741    5712 command_runner.go:130] > Mar 18 12:41:55 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.105741    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0318 12:44:47.105843    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.105843    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:47.105843    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:47.105843    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:47.105843    5712 command_runner.go:130] > Mar 18 12:41:57 minikube cri-dockerd[424]: time="2024-03-18T12:41:57Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:57 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 12:44:47.105941    5712 command_runner.go:130] > Mar 18 12:41:59 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
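	[editor's note] The cri-docker.service failures above follow a fixed pattern: cri-dockerd starts while dockerd is still down, its "get docker version" probe against unix:///var/run/docker.sock fails fatally, and systemd restarts the unit (counters 1 through 3) until the start-rate limit trips ("Start request repeated too quickly"); the unit only comes up cleanly once the Docker engine itself is running at 12:43:19 further down. Below is a minimal Go sketch of that readiness probe, assuming the Docker Engine API's GET /version endpoint over the unix socket; the retry policy and names here are illustrative, not cri-dockerd's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"io"
		"net"
		"net/http"
		"time"
	)

	// dockerReady probes the Docker Engine API's /version endpoint over the
	// daemon's unix socket -- essentially the check behind cri-dockerd's
	// "failed to get docker version" fatal log line.
	func dockerReady(socket string) error {
		client := &http.Client{
			Transport: &http.Transport{
				// Route every request to the unix socket instead of TCP.
				DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
					var d net.Dialer
					return d.DialContext(ctx, "unix", socket)
				},
			},
			Timeout: 2 * time.Second,
		}
		// The "docker" host is a placeholder; DialContext above ignores it.
		resp, err := client.Get("http://docker/version")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		io.Copy(io.Discard, resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("unexpected status %s", resp.Status)
		}
		return nil
	}

	func main() {
		// Retry over a bounded window instead of exiting fatally on first
		// contact -- the behaviour systemd's restart counter emulates above.
		for i := 0; i < 5; i++ {
			if err := dockerReady("/var/run/docker.sock"); err != nil {
				fmt.Println("docker not ready:", err)
				time.Sleep(2 * time.Second)
				continue
			}
			fmt.Println("docker is ready")
			return
		}
	}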
	I0318 12:44:47.106038    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 systemd[1]: Starting Docker Application Container Engine...
	I0318 12:44:47.106038    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.799415676Z" level=info msg="Starting up"
	I0318 12:44:47.106038    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.800442474Z" level=info msg="containerd not running, starting managed containerd"
	I0318 12:44:47.106038    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[652]: time="2024-03-18T12:42:46.801655972Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	I0318 12:44:47.106134    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.836542309Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 12:44:47.106134    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.866837154Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 12:44:47.106134    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.866991653Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 12:44:47.106134    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.867166153Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 12:44:47.106134    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.867346253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106259    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868353051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.106259    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868455451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106259    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868755450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.106259    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868785850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106384    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868803850Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 12:44:47.106384    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.868815950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106384    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.869407649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106384    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.870171948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106384    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873462742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.106502    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873569242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.106502    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873718241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.106502    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.873818241Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 12:44:47.106595    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874315040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 12:44:47.106624    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874434440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 12:44:47.106624    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.874453940Z" level=info msg="metadata content store policy set" policy=shared
	I0318 12:44:47.106624    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880096930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 12:44:47.106624    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880252829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 12:44:47.106725    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880377329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 12:44:47.106725    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880397729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 12:44:47.106725    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880414329Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 12:44:47.106725    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880488329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 12:44:47.106839    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880819128Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 12:44:47.106839    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.880926428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 12:44:47.106839    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881236528Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 12:44:47.106839    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881376427Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 12:44:47.106936    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881400527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.106936    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881426127Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.106936    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881441527Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.106936    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881474927Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.107028    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881491327Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.107028    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881506427Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.107028    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881521027Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.107028    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881536227Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.107122    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881566927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107122    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881586627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107122    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881601327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107122    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881617327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107122    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881631227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107122    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881646527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107234    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881659427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107234    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881673727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107234    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881757827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107234    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881783527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107234    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881798027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107351    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881812927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107351    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881826827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107351    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881844827Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 12:44:47.107351    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881868126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107351    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881889326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107537    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.881902926Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 12:44:47.107537    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882002626Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 12:44:47.107537    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882117726Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 12:44:47.107537    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882162226Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 12:44:47.107672    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882178726Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 12:44:47.107672    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882242626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.107672    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882337926Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 12:44:47.107732    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882358926Z" level=info msg="NRI interface is disabled by configuration."
	I0318 12:44:47.107789    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882603625Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 12:44:47.107789    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.882759725Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 12:44:47.107789    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.883033524Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 12:44:47.107868    5712 command_runner.go:130] > Mar 18 12:42:46 multinode-642600 dockerd[658]: time="2024-03-18T12:42:46.883153424Z" level=info msg="containerd successfully booted in 0.049971s"
	I0318 12:44:47.107868    5712 command_runner.go:130] > Mar 18 12:42:47 multinode-642600 dockerd[652]: time="2024-03-18T12:42:47.858472851Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 12:44:47.107868    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.057442718Z" level=info msg="Loading containers: start."
	I0318 12:44:47.107931    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.544395210Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 12:44:47.107931    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.632442528Z" level=info msg="Loading containers: done."
	I0318 12:44:47.107931    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.662805631Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 12:44:47.107931    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.663682128Z" level=info msg="Daemon has completed initialization"
	I0318 12:44:47.107931    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.725498031Z" level=info msg="API listen on [::]:2376"
	I0318 12:44:47.108025    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 systemd[1]: Started Docker Application Container Engine.
	I0318 12:44:47.108025    5712 command_runner.go:130] > Mar 18 12:42:48 multinode-642600 dockerd[652]: time="2024-03-18T12:42:48.725911430Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 12:44:47.108025    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 systemd[1]: Stopping Docker Application Container Engine...
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.631434936Z" level=info msg="Processing signal 'terminated'"
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.633587433Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634258932Z" level=info msg="Daemon shutdown complete"
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634450831Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:15 multinode-642600 dockerd[652]: time="2024-03-18T12:43:15.634476831Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: docker.service: Deactivated successfully.
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: Stopped Docker Application Container Engine.
	I0318 12:44:47.108085    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 systemd[1]: Starting Docker Application Container Engine...
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.717087499Z" level=info msg="Starting up"
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.718262797Z" level=info msg="containerd not running, starting managed containerd"
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:16.719705495Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1048
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.754738639Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784193992Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784236292Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784275292Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 12:44:47.108237    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784291492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108376    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784317492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.108376    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784331992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108376    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784550091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.108376    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784651691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108376    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784673391Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 12:44:47.108499    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784704091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108499    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784764391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108499    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.784996290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108598    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787641686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.108598    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787744286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 12:44:47.108669    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.787950186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 12:44:47.108727    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788044886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 12:44:47.108727    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788091986Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 12:44:47.108727    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788127185Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 12:44:47.108823    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.788138585Z" level=info msg="metadata content store policy set" policy=shared
	I0318 12:44:47.108875    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789136284Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 12:44:47.108875    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789269784Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 12:44:47.108875    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789298984Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 12:44:47.108875    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789320484Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789342084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.789644383Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.790600382Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791760980Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791832280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791851580Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791866579Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791880279Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791969479Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.791989879Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792004479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792018079Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792030379Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792042479Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792063279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792077879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792090579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792103979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792117779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792135679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792148379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792161279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792174179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792188479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.108988    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792199579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.109529    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792211479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.109529    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792223379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.109529    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792238079Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 12:44:47.109529    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792261579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.109529    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792276079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.109623    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792287879Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 12:44:47.109623    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792337479Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 12:44:47.109623    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792356479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792368079Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792380379Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792530178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792576778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792591078Z" level=info msg="NRI interface is disabled by configuration."
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792811378Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.792927678Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.793108678Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:16 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:16.793160477Z" level=info msg="containerd successfully booted in 0.039931s"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:17 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:17.767243919Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:17 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:17.800090666Z" level=info msg="Loading containers: start."
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.103803081Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.187726546Z" level=info msg="Loading containers: done."
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.216487100Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.216648600Z" level=info msg="Daemon has completed initialization"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.271691012Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 dockerd[1042]: time="2024-03-18T12:43:18.271966711Z" level=info msg="API listen on [::]:2376"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:18 multinode-642600 systemd[1]: Started Docker Application Container Engine.
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Start docker client with request timeout 0s"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0318 12:44:47.109688    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Loaded network plugin cni"
	I0318 12:44:47.110224    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0318 12:44:47.110344    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker Info: &{ID:aa9100d3-1595-41ce-b36f-06932aef3ecb Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:53 SystemTime:2024-03-18T12:43:19.415553382Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002da070 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-642600 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0318 12:44:47.110344    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0318 12:44:47.110418    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0318 12:44:47.110418    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0318 12:44:47.110418    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:19Z" level=info msg="Start cri-dockerd grpc backend"
	I0318 12:44:47.110484    5712 command_runner.go:130] > Mar 18 12:43:19 multinode-642600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0318 12:44:47.110484    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-fgn7v_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ed38da653fbefea9aeb0ebdb91f985394a7a792571704a4875018f5a6bc9abda\""
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-48qkw_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e\""
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.316277241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.317878239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.318571937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.319101537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356638277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356750476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.356767376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.357118676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.418245378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.421018274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.421217073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.422102972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428274662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428365762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428455862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.428580261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67004ee038ee4247f6f751987304426067a63cee8c1636408dd16efea728ba78/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f62197122538f83943df8b19710794ea6ea9a9ffa884082a1a62435e9b152c3f/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eca6768355c74817c50b811b96b5fcc93a181c4968c53d4d4b0d0252ff6dbd0a/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7281d6e698ea2dc42d7d3093ccde32b770bf8367fdb58230694380f40daeb9f/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.110554    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879224940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111092    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879310840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111092    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879325040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111092    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:25.879857239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111092    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.050226267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111092    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.051715465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111231    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.056267457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111231    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.056729856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111333    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.064877643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111372    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065332743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111372    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065495042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111372    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.065849742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111519    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091573301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111519    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091639201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111519    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091652401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111519    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:26.091761800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111519    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:30Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0318 12:44:47.111624    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.923135971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111624    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924017669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111624    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924165569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111746    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.924385369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111746    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955673419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111746    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955753819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111746    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.955772119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111855    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.956168818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111855    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964148405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.111913    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964256705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.111913    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964669604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111964    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:31.964999404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.111964    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a2f0ccaf5c4c6c0019124eda20c358dfa8aa20f0c92ade10aa3de32608e3527/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.111964    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/889c16eb0ab731956d02a28d0337dc6ff349dc574ba10d4fc1a939fb2e09d6d3/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.112058    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391303322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.112058    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391389722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.112058    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391408822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112130    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.391535621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112171    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413113087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.112210    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413460286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.112210    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.413726486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112210    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.414492285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112287    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:43:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ecbdcbdad3fa79af8ef70896ae67d65b14c47b5811078c5d6d167e0f295d1bc/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.112287    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850170088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.112353    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850431387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.112353    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850449987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112405    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 dockerd[1048]: time="2024-03-18T12:43:32.850590387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112405    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011137468Z" level=info msg="shim disconnected" id=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 namespace=moby
	I0318 12:44:47.112405    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011334567Z" level=warning msg="cleaning up after shim disconnected" id=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 namespace=moby
	I0318 12:44:47.112498    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:03.011364567Z" level=info msg="cleaning up dead shim" namespace=moby
	I0318 12:44:47.112498    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 dockerd[1042]: time="2024-03-18T12:44:03.012148165Z" level=info msg="ignoring event" container=787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0318 12:44:47.112534    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562340104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.112578    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562524303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.112638    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.562584503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112638    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:17.563253802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112707    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.376262769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.112733    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.376780468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.112733    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.377021468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112789    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.377223268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112826    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:44:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1090dd57409807a15613607fd810b67863a9dd9c5a8512d7a6720906641c7f26/resolv.conf as [nameserver 172.25.144.1]"
	I0318 12:44:47.112826    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684170919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.112826    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684458920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.112890    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.684558520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112932    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.685142822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.112979    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901354745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901518146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901538746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:35.901651446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 cri-dockerd[1266]: time="2024-03-18T12:44:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e1b2432b0ed66a1175586c13232eb9b9239f18a4f9a86e2a0c5f48c1407fdb14/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.227440411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.227939926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.228081131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 dockerd[1048]: time="2024-03-18T12:44:36.228507343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:39 multinode-642600 dockerd[1042]: 2024/03/18 12:44:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113018    5712 command_runner.go:130] > Mar 18 12:44:40 multinode-642600 dockerd[1042]: 2024/03/18 12:44:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113555    5712 command_runner.go:130] > Mar 18 12:44:40 multinode-642600 dockerd[1042]: 2024/03/18 12:44:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113555    5712 command_runner.go:130] > Mar 18 12:44:40 multinode-642600 dockerd[1042]: 2024/03/18 12:44:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113555    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113555    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:44 multinode-642600 dockerd[1042]: 2024/03/18 12:44:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:44 multinode-642600 dockerd[1042]: 2024/03/18 12:44:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:44 multinode-642600 dockerd[1042]: 2024/03/18 12:44:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:44 multinode-642600 dockerd[1042]: 2024/03/18 12:44:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.113687    5712 command_runner.go:130] > Mar 18 12:44:47 multinode-642600 dockerd[1042]: 2024/03/18 12:44:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 12:44:47.149837    5712 logs.go:123] Gathering logs for kubelet ...
	I0318 12:44:47.150876    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 12:44:47.183095    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:47.183584    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.841405    1388 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:47.183584    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.841736    1388 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.183584    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: I0318 12:43:20.842325    1388 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:47.183584    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 kubelet[1388]: E0318 12:43:20.842583    1388 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 12:44:47.183705    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:47.183705    5712 command_runner.go:130] > Mar 18 12:43:20 multinode-642600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 12:44:47.183705    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0318 12:44:47.183705    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 12:44:47.183831    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:47.183831    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.629315    1445 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:47.183831    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.629808    1445 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.183831    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: I0318 12:43:21.631096    1445 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:47.183962    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 kubelet[1445]: E0318 12:43:21.631229    1445 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 12:44:47.183962    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 12:44:47.183962    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 12:44:47.183962    5712 command_runner.go:130] > Mar 18 12:43:21 multinode-642600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 12:44:47.183962    5712 command_runner.go:130] > Mar 18 12:43:23 multinode-642600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 12:44:47.183962    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.100950    1523 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 12:44:47.184073    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.101311    1523 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.184073    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.101646    1523 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 12:44:47.184073    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.108175    1523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0318 12:44:47.184073    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.123413    1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:47.184303    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.204504    1523 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0318 12:44:47.184303    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205069    1523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0318 12:44:47.184408    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205344    1523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0318 12:44:47.184408    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205667    1523 topology_manager.go:138] "Creating topology manager with none policy"
	I0318 12:44:47.184408    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.205685    1523 container_manager_linux.go:301] "Creating device plugin manager"
	I0318 12:44:47.184408    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.206240    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0318 12:44:47.184491    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.208674    1523 kubelet.go:393] "Attempting to sync node with API server"
	I0318 12:44:47.184528    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.208817    1523 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0318 12:44:47.184528    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.209351    1523 kubelet.go:309] "Adding apiserver pod source"
	I0318 12:44:47.184528    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.209491    1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0318 12:44:47.184597    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.212857    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.184597    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.213311    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.184597    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.219866    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.184684    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.220057    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.184719    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.240215    1523 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.245761    1523 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.248742    1523 server.go:1232] "Started kubelet"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.249814    1523 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.251561    1523 server.go:462] "Adding debug handlers to kubelet server"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.254285    1523 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.255480    1523 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.255659    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-642600.17bddc6f5820f7a9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-642600", UID:"multinode-642600", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-642600"}, FirstTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-642600"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.25.148.129:8443: connect: connection refused'(may retry after sleeping)
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.259469    1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.261490    1523 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.265275    1523 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.270368    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="200ms"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.275611    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.275814    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.317069    1523 reconciler_new.go:29] "Reconciler: start to sync state"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327943    1523 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327963    1523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.327985    1523 state_mem.go:36] "Initialized new in-memory state store"
	I0318 12:44:47.184747    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329007    1523 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329047    1523 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.329057    1523 policy_none.go:49] "None policy: Start"
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.336597    1523 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.336631    1523 state_mem.go:35] "Initializing new in-memory state store"
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.337548    1523 state_mem.go:75] "Updated machine memory state"
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.345495    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0318 12:44:47.185367    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.348154    1523 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0318 12:44:47.185559    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.351399    1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0318 12:44:47.185559    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.355603    1523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0318 12:44:47.185559    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.356232    1523 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0318 12:44:47.185559    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.357037    1523 kubelet.go:2303] "Starting kubelet main sync loop"
	I0318 12:44:47.185660    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.359069    1523 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0318 12:44:47.185660    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: W0318 12:43:24.367050    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.185660    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.367230    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.185660    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.387242    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 12:44:47.185660    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 12:44:47.185818    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 12:44:47.185818    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 12:44:47.185818    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 12:44:47.185905    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.387428    1523 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-642600\" not found"
	I0318 12:44:47.185905    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.399151    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:47.185905    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.399841    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:47.185977    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.460339    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d5f09afee1a6ef36657c1ae3335ddda6" podNamespace="kube-system" podName="etcd-multinode-642600"
	I0318 12:44:47.185977    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.472389    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="400ms"
	I0318 12:44:47.186119    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.474475    1523 topology_manager.go:215] "Topology Admit Handler" podUID="624de65f019baf96d4a0e2fb6064e413" podNamespace="kube-system" podName="kube-apiserver-multinode-642600"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.487469    1523 topology_manager.go:215] "Topology Admit Handler" podUID="a1608bc774d0b3e96e1b6fbbded5cb52" podNamespace="kube-system" podName="kube-controller-manager-multinode-642600"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.500311    1523 topology_manager.go:215] "Topology Admit Handler" podUID="cf50844b540be8ed0b3e767db413ac8f" podNamespace="kube-system" podName="kube-scheduler-multinode-642600"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.527553    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d5f09afee1a6ef36657c1ae3335ddda6-etcd-certs\") pod \"etcd-multinode-642600\" (UID: \"d5f09afee1a6ef36657c1ae3335ddda6\") " pod="kube-system/etcd-multinode-642600"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.527604    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d5f09afee1a6ef36657c1ae3335ddda6-etcd-data\") pod \"etcd-multinode-642600\" (UID: \"d5f09afee1a6ef36657c1ae3335ddda6\") " pod="kube-system/etcd-multinode-642600"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534726    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed38da653fbefea9aeb0ebdb91f985394a7a792571704a4875018f5a6bc9abda"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534857    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d766c4514f0bf79902b72d04d9e3a09fc2bcf5ef330f41cd3e84e63f5151f2b6"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534873    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f100b1062a56929e04e6e4377055b065d93a28c504f060cce4695165a2c33db0"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534885    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a9b4c05a5ccd5364b8dac2797803c98520c4f98df0fba77af7521af64a15152"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534943    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f4709a3a45a45f0c67f457df8bb202ea2867cfedeaec4a164509190df13f510"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.534961    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3500a9f1ca84ed3d58cdd473a0c7c47a59643858c05dfd90247a09b1b43302bd"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.552869    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad98ae0cd7c7708c7e02f0b23fc33f1ca2b404bd7fec324c21beefcbe17d009"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.571969    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29bb4d534c2e2b00dfe907d4443637851e3c3348e31bf00939cd6efad71c4e2e"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.589127    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef37141be6db2ba71fd0f1d2feee00d6ab5d31d607323e4f5ffab4a3e70cfa5"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.614112    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:47.186147    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.616006    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:47.186702    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629143    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-flexvolume-dir\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:47.186793    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629404    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:47.186880    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629689    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:47.186880    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629754    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-ca-certs\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:47.186947    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629780    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-k8s-certs\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:47.187032    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629802    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1608bc774d0b3e96e1b6fbbded5cb52-kubeconfig\") pod \"kube-controller-manager-multinode-642600\" (UID: \"a1608bc774d0b3e96e1b6fbbded5cb52\") " pod="kube-system/kube-controller-manager-multinode-642600"
	I0318 12:44:47.187032    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629825    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf50844b540be8ed0b3e767db413ac8f-kubeconfig\") pod \"kube-scheduler-multinode-642600\" (UID: \"cf50844b540be8ed0b3e767db413ac8f\") " pod="kube-system/kube-scheduler-multinode-642600"
	I0318 12:44:47.187126    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629860    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-ca-certs\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:47.187126    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: I0318 12:43:24.629919    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/624de65f019baf96d4a0e2fb6064e413-k8s-certs\") pod \"kube-apiserver-multinode-642600\" (UID: \"624de65f019baf96d4a0e2fb6064e413\") " pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:47.187202    5712 command_runner.go:130] > Mar 18 12:43:24 multinode-642600 kubelet[1523]: E0318 12:43:24.875125    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="800ms"
	I0318 12:44:47.187202    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.030740    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:47.187202    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.031776    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:47.187330    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.266849    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187330    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.266980    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187330    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.674768    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7281d6e698ea2dc42d7d3093ccde32b770bf8367fdb58230694380f40daeb9f"
	I0318 12:44:47.187330    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.676706    1523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-642600?timeout=10s\": dial tcp 172.25.148.129:8443: connect: connection refused" interval="1.6s"
	I0318 12:44:47.187330    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.692553    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eca6768355c74817c50b811b96b5fcc93a181c4968c53d4d4b0d0252ff6dbd0a"
	I0318 12:44:47.187549    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.700976    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187549    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.701062    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187549    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.708111    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f62197122538f83943df8b19710794ea6ea9a9ffa884082a1a62435e9b152c3f"
	I0318 12:44:47.187549    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.731607    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187671    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.731695    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187671    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: W0318 12:43:25.790774    1523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187751    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.790867    1523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-642600&limit=500&resourceVersion=0": dial tcp 172.25.148.129:8443: connect: connection refused
	I0318 12:44:47.187751    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: I0318 12:43:25.868581    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:47.187829    5712 command_runner.go:130] > Mar 18 12:43:25 multinode-642600 kubelet[1523]: E0318 12:43:25.869663    1523 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.148.129:8443: connect: connection refused" node="multinode-642600"
	I0318 12:44:47.187906    5712 command_runner.go:130] > Mar 18 12:43:26 multinode-642600 kubelet[1523]: E0318 12:43:26.129309    1523 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-642600.17bddc6f5820f7a9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-642600", UID:"multinode-642600", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-642600"}, FirstTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 12, 43, 24, 248692649, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-642600"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.25.148.129:8443: connect: connection refused'(may retry after sleeping)
	I0318 12:44:47.187906    5712 command_runner.go:130] > Mar 18 12:43:27 multinode-642600 kubelet[1523]: I0318 12:43:27.488157    1523 kubelet_node_status.go:70] "Attempting to register node" node="multinode-642600"
	I0318 12:44:47.187906    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.626198    1523 kubelet_node_status.go:108] "Node was previously registered" node="multinode-642600"
	I0318 12:44:47.188003    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.626989    1523 kubelet_node_status.go:73] "Successfully registered node" node="multinode-642600"
	I0318 12:44:47.188003    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.640050    1523 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0318 12:44:47.188003    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.642279    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0318 12:44:47.188081    5712 command_runner.go:130] > Mar 18 12:43:30 multinode-642600 kubelet[1523]: I0318 12:43:30.658382    1523 setters.go:552] "Node became not ready" node="multinode-642600" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-18T12:43:30Z","lastTransitionTime":"2024-03-18T12:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0318 12:44:47.188081    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.223393    1523 apiserver.go:52] "Watching apiserver"
	I0318 12:44:47.188081    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.230566    1523 topology_manager.go:215] "Topology Admit Handler" podUID="acd9d7a0-0e27-4bbb-8562-6fbf374742ca" podNamespace="kube-system" podName="kindnet-kpt4f"
	I0318 12:44:47.188081    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231421    1523 topology_manager.go:215] "Topology Admit Handler" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b" podNamespace="kube-system" podName="coredns-5dd5756b68-fgn7v"
	I0318 12:44:47.188312    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231644    1523 topology_manager.go:215] "Topology Admit Handler" podUID="449242c2-ad12-4da5-b339-3be7ab8a9b16" podNamespace="kube-system" podName="kube-proxy-4dg79"
	I0318 12:44:47.188312    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231779    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d2718b8a-26a9-4c86-bf9a-221d1ee23ceb" podNamespace="kube-system" podName="storage-provisioner"
	I0318 12:44:47.188312    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.231939    1523 topology_manager.go:215] "Topology Admit Handler" podUID="45969c0e-ac43-459e-95c0-86f7b76947db" podNamespace="default" podName="busybox-5b5d89c9d6-48qkw"
	I0318 12:44:47.188421    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.232191    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.188421    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.233435    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.188421    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.235227    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-642600" podUID="4aa98cb9-f6ab-40b3-8c15-235ba4e09985"
	I0318 12:44:47.188506    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.236365    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-642600" podUID="237133d7-6f1a-42ee-8cf2-a2d7564d67fc"
	I0318 12:44:47.188506    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.266715    1523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0318 12:44:47.188589    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.289094    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-642600"
	I0318 12:44:47.188671    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.301996    1523 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-642600"
	I0318 12:44:47.188671    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.322408    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/449242c2-ad12-4da5-b339-3be7ab8a9b16-lib-modules\") pod \"kube-proxy-4dg79\" (UID: \"449242c2-ad12-4da5-b339-3be7ab8a9b16\") " pod="kube-system/kube-proxy-4dg79"
	I0318 12:44:47.188671    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.322793    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-xtables-lock\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:47.188775    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323081    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d2718b8a-26a9-4c86-bf9a-221d1ee23ceb-tmp\") pod \"storage-provisioner\" (UID: \"d2718b8a-26a9-4c86-bf9a-221d1ee23ceb\") " pod="kube-system/storage-provisioner"
	I0318 12:44:47.188775    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323213    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-cni-cfg\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:47.188775    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323245    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/449242c2-ad12-4da5-b339-3be7ab8a9b16-xtables-lock\") pod \"kube-proxy-4dg79\" (UID: \"449242c2-ad12-4da5-b339-3be7ab8a9b16\") " pod="kube-system/kube-proxy-4dg79"
	I0318 12:44:47.188891    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.323294    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acd9d7a0-0e27-4bbb-8562-6fbf374742ca-lib-modules\") pod \"kindnet-kpt4f\" (UID: \"acd9d7a0-0e27-4bbb-8562-6fbf374742ca\") " pod="kube-system/kindnet-kpt4f"
	I0318 12:44:47.188891    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.324469    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.189001    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.324580    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:31.824540428 +0000 UTC m=+7.835780164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.189001    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339515    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189001    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339554    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189001    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.339661    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:31.839645304 +0000 UTC m=+7.850885040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189124    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.384452    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-642600" podStartSLOduration=0.384368133 podCreationTimestamp="2024-03-18 12:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:43:31.360389871 +0000 UTC m=+7.371629607" watchObservedRunningTime="2024-03-18 12:43:31.384368133 +0000 UTC m=+7.395607769"
	I0318 12:44:47.189199    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: I0318 12:43:31.431280    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-642600" podStartSLOduration=0.431225058 podCreationTimestamp="2024-03-18 12:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:43:31.388015127 +0000 UTC m=+7.399254863" watchObservedRunningTime="2024-03-18 12:43:31.431225058 +0000 UTC m=+7.442464794"
	I0318 12:44:47.189237    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.828430    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.189237    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.828605    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:32.828568222 +0000 UTC m=+8.839807858 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.189326    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930285    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189326    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930420    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189326    5712 command_runner.go:130] > Mar 18 12:43:31 multinode-642600 kubelet[1523]: E0318 12:43:31.930532    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:32.930496159 +0000 UTC m=+8.941735795 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189435    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.133795    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889c16eb0ab731956d02a28d0337dc6ff349dc574ba10d4fc1a939fb2e09d6d3"
	I0318 12:44:47.189435    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.147805    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a2f0ccaf5c4c6c0019124eda20c358dfa8aa20f0c92ade10aa3de32608e3527"
	I0318 12:44:47.189435    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.369742    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d04d3e415061983b742e6c14f1a5f562" path="/var/lib/kubelet/pods/d04d3e415061983b742e6c14f1a5f562/volumes"
	I0318 12:44:47.189536    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.371223    1523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ec96a596e22f5afedbd92a854d1b8bec" path="/var/lib/kubelet/pods/ec96a596e22f5afedbd92a854d1b8bec/volumes"
	I0318 12:44:47.189536    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.628360    1523 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-642600" podUID="237133d7-6f1a-42ee-8cf2-a2d7564d67fc"
	I0318 12:44:47.189536    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: I0318 12:43:32.628590    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ecbdcbdad3fa79af8ef70896ae67d65b14c47b5811078c5d6d167e0f295d1bc"
	I0318 12:44:47.189648    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.836390    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.189785    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.836523    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:34.836498609 +0000 UTC m=+10.847738345 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937295    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937349    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:32 multinode-642600 kubelet[1523]: E0318 12:43:32.937443    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:34.937423048 +0000 UTC m=+10.948662684 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:33 multinode-642600 kubelet[1523]: E0318 12:43:33.359564    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:33 multinode-642600 kubelet[1523]: E0318 12:43:33.359732    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.409996    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.855132    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.855288    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:38.85526758 +0000 UTC m=+14.866507216 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955668    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955718    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:34 multinode-642600 kubelet[1523]: E0318 12:43:34.955777    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:38.955759519 +0000 UTC m=+14.966999155 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:35 multinode-642600 kubelet[1523]: E0318 12:43:35.360249    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:35 multinode-642600 kubelet[1523]: E0318 12:43:35.360337    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.189815    5712 command_runner.go:130] > Mar 18 12:43:37 multinode-642600 kubelet[1523]: E0318 12:43:37.360005    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.190363    5712 command_runner.go:130] > Mar 18 12:43:37 multinode-642600 kubelet[1523]: E0318 12:43:37.360005    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.190363    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.890447    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.190363    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.890642    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:43:46.890560586 +0000 UTC m=+22.901800222 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.190363    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991640    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.190482    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991754    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:38 multinode-642600 kubelet[1523]: E0318 12:43:38.991856    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:43:46.991836746 +0000 UTC m=+23.003076482 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.360236    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.360508    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:39 multinode-642600 kubelet[1523]: E0318 12:43:39.425235    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:41 multinode-642600 kubelet[1523]: E0318 12:43:41.360362    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:41 multinode-642600 kubelet[1523]: E0318 12:43:41.360863    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:43 multinode-642600 kubelet[1523]: E0318 12:43:43.359722    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:43 multinode-642600 kubelet[1523]: E0318 12:43:43.360308    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:44 multinode-642600 kubelet[1523]: E0318 12:43:44.438590    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:45 multinode-642600 kubelet[1523]: E0318 12:43:45.360026    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:45 multinode-642600 kubelet[1523]: E0318 12:43:45.360101    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:46 multinode-642600 kubelet[1523]: E0318 12:43:46.970368    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:46 multinode-642600 kubelet[1523]: E0318 12:43:46.970583    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:44:02.970562522 +0000 UTC m=+38.981802258 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071352    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.190559    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071390    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.191139    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.071448    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:44:03.071430219 +0000 UTC m=+39.082669855 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.191139    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.359847    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191235    5712 command_runner.go:130] > Mar 18 12:43:47 multinode-642600 kubelet[1523]: E0318 12:43:47.360318    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191281    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.360074    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191318    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.360604    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191351    5712 command_runner.go:130] > Mar 18 12:43:49 multinode-642600 kubelet[1523]: E0318 12:43:49.453099    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:51 multinode-642600 kubelet[1523]: E0318 12:43:51.360369    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:51 multinode-642600 kubelet[1523]: E0318 12:43:51.361016    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:53 multinode-642600 kubelet[1523]: E0318 12:43:53.359799    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:53 multinode-642600 kubelet[1523]: E0318 12:43:53.359935    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:54 multinode-642600 kubelet[1523]: E0318 12:43:54.467487    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:55 multinode-642600 kubelet[1523]: E0318 12:43:55.359513    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:55 multinode-642600 kubelet[1523]: E0318 12:43:55.360047    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:57 multinode-642600 kubelet[1523]: E0318 12:43:57.359796    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:57 multinode-642600 kubelet[1523]: E0318 12:43:57.359970    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191381    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.360327    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.191918    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.360455    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.191918    5712 command_runner.go:130] > Mar 18 12:43:59 multinode-642600 kubelet[1523]: E0318 12:43:59.483297    1523 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0318 12:44:47.192010    5712 command_runner.go:130] > Mar 18 12:44:01 multinode-642600 kubelet[1523]: E0318 12:44:01.359691    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.192010    5712 command_runner.go:130] > Mar 18 12:44:01 multinode-642600 kubelet[1523]: E0318 12:44:01.360228    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032626    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032722    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.0327033 +0000 UTC m=+71.043942936 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134727    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134857    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.135073    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.13505028 +0000 UTC m=+71.146289916 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360260    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360354    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.124509    1523 scope.go:117] "RemoveContainer" containerID="996fb0f2ade69129acd747fc5146ef4295cc7ebd79cae8e8f881a21393ddb74a"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.125880    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:04 multinode-642600 kubelet[1523]: E0318 12:44:04.127355    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d2718b8a-26a9-4c86-bf9a-221d1ee23ceb)\"" pod="kube-system/storage-provisioner" podUID="d2718b8a-26a9-4c86-bf9a-221d1ee23ceb"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:17 multinode-642600 kubelet[1523]: I0318 12:44:17.359956    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.325657    1523 scope.go:117] "RemoveContainer" containerID="301c80f8b38cb79f051755af6af0fb604c0eee0689fd1f2d22a66e0969a9583f"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.374630    1523 scope.go:117] "RemoveContainer" containerID="4b94d396876e5c7e3b8c69b01560d10ad95ff183ab3cc78a194276537cfd6cf5"
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]: E0318 12:44:24.399375    1523 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 12:44:47.192125    5712 command_runner.go:130] > Mar 18 12:44:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 12:44:47.192659    5712 command_runner.go:130] > Mar 18 12:44:35 multinode-642600 kubelet[1523]: I0318 12:44:35.962288    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1b2432b0ed66a1175586c13232eb9b9239f18a4f9a86e2a0c5f48c1407fdb14"
	I0318 12:44:47.192659    5712 command_runner.go:130] > Mar 18 12:44:36 multinode-642600 kubelet[1523]: I0318 12:44:36.079817    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1090dd57409807a15613607fd810b67863a9dd9c5a8512d7a6720906641c7f26"
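The recurring "cni config uninitialized" kubelet errors above persist only until kindnet drops a CNI config back onto the node after the restart; once the sandboxes are recreated at 12:44:35-36, the coredns and busybox pods come back (see the container listing further below). The ip6tables canary failure at 12:44:24 is the kubelet probing the IPv6 nat table, which this guest kernel does not load; for an IPv4-only cluster that warning is typically harmless. A minimal sketch for checking the CNI state by hand, assuming the conventional config directory /etc/cni/net.d (an assumption, not read from this log) and the profile name from this run:

    # Does a CNI config exist on the node yet? (/etc/cni/net.d is the
    # standard CNI location; it is an assumption here, not shown in this log)
    minikube ssh -p multinode-642600 "ls -l /etc/cni/net.d"
    # Watch the node flip back to Ready once the network plugin reports in
    kubectl --context multinode-642600 get nodes -w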
	I0318 12:44:47.239763    5712 logs.go:123] Gathering logs for dmesg ...
	I0318 12:44:47.239763    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 12:44:47.271820    5712 command_runner.go:130] > [Mar18 12:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.129398] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.023142] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.067111] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.023049] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0318 12:44:47.271874    5712 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +5.633479] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.746575] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +1.948336] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +7.356358] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0318 12:44:47.271874    5712 command_runner.go:130] > [Mar18 12:42] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.196447] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [Mar18 12:43] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.116812] kauditd_printk_skb: 73 callbacks suppressed
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.565179] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.224131] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.243543] systemd-fstab-generator[1034]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +2.986318] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.197212] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	I0318 12:44:47.271874    5712 command_runner.go:130] > [  +0.228503] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0318 12:44:47.272424    5712 command_runner.go:130] > [  +0.297734] systemd-fstab-generator[1258]: Ignoring "noauto" option for root device
	I0318 12:44:47.272424    5712 command_runner.go:130] > [  +0.969011] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	I0318 12:44:47.272424    5712 command_runner.go:130] > [  +0.114690] kauditd_printk_skb: 205 callbacks suppressed
	I0318 12:44:47.272424    5712 command_runner.go:130] > [  +3.575437] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	I0318 12:44:47.272424    5712 command_runner.go:130] > [  +1.537938] kauditd_printk_skb: 44 callbacks suppressed
	I0318 12:44:47.272492    5712 command_runner.go:130] > [  +6.654182] kauditd_printk_skb: 30 callbacks suppressed
	I0318 12:44:47.272492    5712 command_runner.go:130] > [  +4.384606] systemd-fstab-generator[2563]: Ignoring "noauto" option for root device
	I0318 12:44:47.272492    5712 command_runner.go:130] > [  +7.200668] kauditd_printk_skb: 70 callbacks suppressed
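The dmesg excerpt above is the output of the single filtered capture the log-gatherer runs at 12:44:47.239 (warn level and above, last 400 lines); the entries shown appear to be the usual minikube guest boot noise (nomodeset, CPU-mitigation notices, NFSD recovery directory, fstab generators) rather than anything specific to this failure. To repeat the same capture against a live node, one option (profile name taken from this run) is:

    # Same filter the harness uses: human-readable, no pager, no color,
    # warnings and worse only, trimmed to the last 400 lines
    minikube ssh -p multinode-642600 \
      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"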
	I0318 12:44:47.274746    5712 logs.go:123] Gathering logs for kube-controller-manager [14ae9398d33b] ...
	I0318 12:44:47.274818    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ae9398d33b"
	I0318 12:44:47.313572    5712 command_runner.go:130] ! I0318 12:43:27.406049       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:47.313572    5712 command_runner.go:130] ! I0318 12:43:29.733819       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:44:47.313572    5712 command_runner.go:130] ! I0318 12:43:29.734137       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.313658    5712 command_runner.go:130] ! I0318 12:43:29.737351       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:47.313658    5712 command_runner.go:130] ! I0318 12:43:29.737598       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:47.313751    5712 command_runner.go:130] ! I0318 12:43:29.739365       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:44:47.313751    5712 command_runner.go:130] ! I0318 12:43:29.740428       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:47.313751    5712 command_runner.go:130] ! I0318 12:43:32.581261       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 12:44:47.313751    5712 command_runner.go:130] ! I0318 12:43:32.597867       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 12:44:47.313816    5712 command_runner.go:130] ! I0318 12:43:32.602078       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 12:44:47.313816    5712 command_runner.go:130] ! I0318 12:43:32.602099       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 12:44:47.313816    5712 command_runner.go:130] ! I0318 12:43:32.605600       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 12:44:47.313816    5712 command_runner.go:130] ! I0318 12:43:32.605807       1 expand_controller.go:328] "Starting expand controller"
	I0318 12:44:47.313888    5712 command_runner.go:130] ! I0318 12:43:32.605957       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 12:44:47.313888    5712 command_runner.go:130] ! I0318 12:43:32.620725       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 12:44:47.313888    5712 command_runner.go:130] ! I0318 12:43:32.621286       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 12:44:47.313888    5712 command_runner.go:130] ! I0318 12:43:32.621374       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 12:44:47.313888    5712 command_runner.go:130] ! I0318 12:43:32.663010       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 12:44:47.313946    5712 command_runner.go:130] ! I0318 12:43:32.663383       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 12:44:47.313946    5712 command_runner.go:130] ! I0318 12:43:32.663451       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 12:44:47.313946    5712 command_runner.go:130] ! I0318 12:43:32.674431       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 12:44:47.313946    5712 command_runner.go:130] ! I0318 12:43:32.675030       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 12:44:47.313946    5712 command_runner.go:130] ! I0318 12:43:32.675045       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 12:44:47.314010    5712 command_runner.go:130] ! I0318 12:43:32.680220       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 12:44:47.314010    5712 command_runner.go:130] ! I0318 12:43:32.680236       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 12:44:47.314010    5712 command_runner.go:130] ! I0318 12:43:32.680266       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:47.314079    5712 command_runner.go:130] ! I0318 12:43:32.681919       1 shared_informer.go:318] Caches are synced for tokens
	I0318 12:44:47.314079    5712 command_runner.go:130] ! I0318 12:43:32.684132       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 12:44:47.314079    5712 command_runner.go:130] ! I0318 12:43:32.684147       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 12:44:47.314460    5712 command_runner.go:130] ! I0318 12:43:32.684164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:47.314460    5712 command_runner.go:130] ! I0318 12:43:32.685811       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 12:44:47.314460    5712 command_runner.go:130] ! I0318 12:43:32.685845       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:47.314523    5712 command_runner.go:130] ! I0318 12:43:32.686123       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:47.314523    5712 command_runner.go:130] ! I0318 12:43:32.687526       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 12:44:47.314588    5712 command_runner.go:130] ! I0318 12:43:32.687845       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.687858       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.687918       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.691958       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.692673       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.696192       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.696622       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.701031       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.701415       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.701449       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 12:44:47.314653    5712 command_runner.go:130] ! I0318 12:43:32.701458       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 12:44:47.314843    5712 command_runner.go:130] ! E0318 12:43:32.705162       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 12:44:47.314843    5712 command_runner.go:130] ! I0318 12:43:32.705349       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 12:44:47.314907    5712 command_runner.go:130] ! I0318 12:43:32.705364       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 12:44:47.314907    5712 command_runner.go:130] ! I0318 12:43:32.705376       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 12:44:47.314907    5712 command_runner.go:130] ! I0318 12:43:32.750736       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 12:44:47.314965    5712 command_runner.go:130] ! I0318 12:43:32.751361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 12:44:47.314965    5712 command_runner.go:130] ! W0318 12:43:32.751515       1 shared_informer.go:593] resyncPeriod 19h34m1.540802039s is smaller than resyncCheckPeriod 20h12m46.622656472s and the informer has already started. Changing it to 20h12m46.622656472s
	I0318 12:44:47.315004    5712 command_runner.go:130] ! I0318 12:43:32.752012       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 12:44:47.315050    5712 command_runner.go:130] ! I0318 12:43:32.752529       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 12:44:47.315050    5712 command_runner.go:130] ! I0318 12:43:32.752719       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 12:44:47.315050    5712 command_runner.go:130] ! I0318 12:43:32.752884       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 12:44:47.315116    5712 command_runner.go:130] ! I0318 12:43:32.753191       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 12:44:47.315116    5712 command_runner.go:130] ! I0318 12:43:32.753284       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 12:44:47.315116    5712 command_runner.go:130] ! I0318 12:43:32.753677       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 12:44:47.315116    5712 command_runner.go:130] ! I0318 12:43:32.753791       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 12:44:47.315193    5712 command_runner.go:130] ! I0318 12:43:32.753884       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 12:44:47.315193    5712 command_runner.go:130] ! I0318 12:43:32.754036       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 12:44:47.315193    5712 command_runner.go:130] ! I0318 12:43:32.754202       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 12:44:47.315193    5712 command_runner.go:130] ! I0318 12:43:32.754691       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 12:44:47.315258    5712 command_runner.go:130] ! I0318 12:43:32.755001       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 12:44:47.315258    5712 command_runner.go:130] ! I0318 12:43:32.755205       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 12:44:47.315340    5712 command_runner.go:130] ! I0318 12:43:32.755784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 12:44:47.315340    5712 command_runner.go:130] ! I0318 12:43:32.755974       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 12:44:47.315399    5712 command_runner.go:130] ! I0318 12:43:32.756144       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 12:44:47.315399    5712 command_runner.go:130] ! I0318 12:43:32.756649       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 12:44:47.315399    5712 command_runner.go:130] ! I0318 12:43:32.756826       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 12:44:47.315399    5712 command_runner.go:130] ! I0318 12:43:32.757119       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:47.315465    5712 command_runner.go:130] ! I0318 12:43:32.757364       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 12:44:47.315465    5712 command_runner.go:130] ! I0318 12:43:32.757580       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 12:44:47.315524    5712 command_runner.go:130] ! E0318 12:43:32.773718       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 12:44:47.315524    5712 command_runner.go:130] ! I0318 12:43:32.773746       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 12:44:47.315524    5712 command_runner.go:130] ! I0318 12:43:32.786590       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 12:44:47.315588    5712 command_runner.go:130] ! I0318 12:43:32.786978       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 12:44:47.315588    5712 command_runner.go:130] ! I0318 12:43:32.787007       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 12:44:47.315588    5712 command_runner.go:130] ! I0318 12:43:32.795770       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 12:44:47.315588    5712 command_runner.go:130] ! I0318 12:43:32.798452       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 12:44:47.315588    5712 command_runner.go:130] ! I0318 12:43:32.798585       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 12:44:47.315653    5712 command_runner.go:130] ! I0318 12:43:32.801712       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 12:44:47.315653    5712 command_runner.go:130] ! I0318 12:43:32.802261       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 12:44:47.315653    5712 command_runner.go:130] ! I0318 12:43:32.806063       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 12:44:47.315719    5712 command_runner.go:130] ! I0318 12:43:32.823560       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 12:44:47.315719    5712 command_runner.go:130] ! I0318 12:43:32.823578       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 12:44:47.315719    5712 command_runner.go:130] ! I0318 12:43:32.823595       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:47.315719    5712 command_runner.go:130] ! I0318 12:43:32.823621       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 12:44:47.315785    5712 command_runner.go:130] ! I0318 12:43:32.833033       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 12:44:47.315785    5712 command_runner.go:130] ! I0318 12:43:32.833480       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 12:44:47.315785    5712 command_runner.go:130] ! I0318 12:43:32.833494       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 12:44:47.315871    5712 command_runner.go:130] ! I0318 12:43:32.862160       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 12:44:47.315871    5712 command_runner.go:130] ! I0318 12:43:32.862209       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 12:44:47.315871    5712 command_runner.go:130] ! I0318 12:43:32.862524       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 12:44:47.315871    5712 command_runner.go:130] ! I0318 12:43:32.862562       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 12:44:47.315871    5712 command_runner.go:130] ! I0318 12:43:32.862573       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 12:44:47.315925    5712 command_runner.go:130] ! I0318 12:43:32.883369       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 12:44:47.315925    5712 command_runner.go:130] ! I0318 12:43:32.886141       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 12:44:47.315977    5712 command_runner.go:130] ! I0318 12:43:32.886674       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 12:44:47.315977    5712 command_runner.go:130] ! I0318 12:43:32.896468       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 12:44:47.315977    5712 command_runner.go:130] ! I0318 12:43:32.896951       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 12:44:47.316049    5712 command_runner.go:130] ! I0318 12:43:32.897135       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 12:44:47.316049    5712 command_runner.go:130] ! I0318 12:43:32.900325       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 12:44:47.316049    5712 command_runner.go:130] ! I0318 12:43:32.900580       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 12:44:47.316104    5712 command_runner.go:130] ! I0318 12:43:32.903531       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 12:44:47.316104    5712 command_runner.go:130] ! I0318 12:43:32.917793       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 12:44:47.316104    5712 command_runner.go:130] ! I0318 12:43:32.918152       1 horizontal.go:200] "Starting HPA controller"
	I0318 12:44:47.316104    5712 command_runner.go:130] ! I0318 12:43:32.918638       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 12:44:47.316171    5712 command_runner.go:130] ! I0318 12:43:32.920489       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 12:44:47.316171    5712 command_runner.go:130] ! I0318 12:43:32.920802       1 gc_controller.go:101] "Starting GC controller"
	I0318 12:44:47.316171    5712 command_runner.go:130] ! I0318 12:43:32.922940       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 12:44:47.316171    5712 command_runner.go:130] ! I0318 12:43:32.923834       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 12:44:47.316235    5712 command_runner.go:130] ! I0318 12:43:32.924143       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 12:44:47.316235    5712 command_runner.go:130] ! I0318 12:43:32.924461       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 12:44:47.316235    5712 command_runner.go:130] ! I0318 12:43:32.935394       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 12:44:47.316235    5712 command_runner.go:130] ! I0318 12:43:32.935610       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 12:44:47.316303    5712 command_runner.go:130] ! I0318 12:43:32.935623       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 12:44:47.316303    5712 command_runner.go:130] ! I0318 12:43:32.996434       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 12:44:47.316303    5712 command_runner.go:130] ! I0318 12:43:32.996586       1 job_controller.go:226] "Starting job controller"
	I0318 12:44:47.316303    5712 command_runner.go:130] ! I0318 12:43:32.996666       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 12:44:47.316370    5712 command_runner.go:130] ! I0318 12:43:33.085354       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 12:44:47.316370    5712 command_runner.go:130] ! I0318 12:43:33.086157       1 disruption.go:433] "Sending events to api server."
	I0318 12:44:47.316370    5712 command_runner.go:130] ! I0318 12:43:33.086235       1 disruption.go:444] "Starting disruption controller"
	I0318 12:44:47.316370    5712 command_runner.go:130] ! I0318 12:43:33.086245       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 12:44:47.316370    5712 command_runner.go:130] ! I0318 12:43:33.141477       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 12:44:47.316370    5712 command_runner.go:130] ! I0318 12:43:33.142359       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 12:44:47.316467    5712 command_runner.go:130] ! I0318 12:43:33.142566       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 12:44:47.316467    5712 command_runner.go:130] ! I0318 12:43:33.186973       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 12:44:47.316467    5712 command_runner.go:130] ! I0318 12:43:33.187335       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 12:44:47.316467    5712 command_runner.go:130] ! I0318 12:43:33.187410       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 12:44:47.316534    5712 command_runner.go:130] ! I0318 12:43:33.236517       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 12:44:47.316575    5712 command_runner.go:130] ! I0318 12:43:33.236982       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:33.237471       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:33.286539       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:33.287154       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:33.287375       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.355688       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.355845       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.356879       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.357033       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.359716       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.361043       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.361062       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.364706       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.364861       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.364989       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.369492       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.369675       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.369706       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.375944       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.376145       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.377600       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.390058       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.405940       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600\" does not exist"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.408115       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.408433       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.408623       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m02\" does not exist"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.408708       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.408817       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.421506       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.446678       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.459596       1 shared_informer.go:318] Caches are synced for node
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.459833       1 range_allocator.go:174] "Sending events to api server"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.460258       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.460829       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.461091       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 12:44:47.316623    5712 command_runner.go:130] ! I0318 12:43:43.461418       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.463618       1 shared_informer.go:318] Caches are synced for namespace
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.466097       1 shared_informer.go:318] Caches are synced for taint
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.466427       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.466639       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.466863       1 taint_manager.go:210] "Sending events to api server"
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.468821       1 event.go:307] "Event occurred" object="multinode-642600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600 event: Registered Node multinode-642600 in Controller"
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.469328       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller"
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.469579       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.469959       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.477268       1 shared_informer.go:318] Caches are synced for deployment
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.486297       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.487082       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.487171       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 12:44:47.317249    5712 command_runner.go:130] ! I0318 12:43:43.487768       1 shared_informer.go:318] Caches are synced for TTL
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.487848       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.489265       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.497682       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.498610       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.498725       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.501123       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600"
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.503362       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.505991       1 shared_informer.go:318] Caches are synced for expand
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.503938       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.506104       1 shared_informer.go:318] Caches are synced for service account
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.505782       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m02"
	I0318 12:44:47.317799    5712 command_runner.go:130] ! I0318 12:43:43.505818       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m03"
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.506356       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.521010       1 shared_informer.go:318] Caches are synced for HPA
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.524230       1 shared_informer.go:318] Caches are synced for GC
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.527081       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.534422       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.537721       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.545260       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 12:44:47.318000    5712 command_runner.go:130] ! I0318 12:43:43.546769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.454588ms"
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.547853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.476888ms"
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.552128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66µs"
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.552429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.199µs"
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.565701       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.580927       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.585098       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 12:44:47.318115    5712 command_runner.go:130] ! I0318 12:43:43.586663       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:43.590461       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:43.597830       1 shared_informer.go:318] Caches are synced for job
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:43.635734       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:43.658493       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:43.686534       1 shared_informer.go:318] Caches are synced for disruption
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:44.024395       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:44.024760       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:43:44.048280       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:47.318233    5712 command_runner.go:130] ! I0318 12:44:11.303411       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:47.318381    5712 command_runner.go:130] ! I0318 12:44:13.533509       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-48qkw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-48qkw"
	I0318 12:44:47.318381    5712 command_runner.go:130] ! I0318 12:44:13.534203       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-fgn7v" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-fgn7v"
	I0318 12:44:47.318381    5712 command_runner.go:130] ! I0318 12:44:13.534478       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0318 12:44:47.318381    5712 command_runner.go:130] ! I0318 12:44:23.562573       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m02 status is now: NodeNotReady"
	I0318 12:44:47.318486    5712 command_runner.go:130] ! I0318 12:44:23.591486       1 event.go:307] "Event occurred" object="kube-system/kindnet-d5llj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:47.318486    5712 command_runner.go:130] ! I0318 12:44:23.614671       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vts9f" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:47.318486    5712 command_runner.go:130] ! I0318 12:44:23.639496       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-hmhdf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:47.318486    5712 command_runner.go:130] ! I0318 12:44:23.661949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.740356ms"
	I0318 12:44:47.318486    5712 command_runner.go:130] ! I0318 12:44:23.663289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="50.499µs"
	I0318 12:44:47.318486    5712 command_runner.go:130] ! I0318 12:44:37.149797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.1µs"
	I0318 12:44:47.318609    5712 command_runner.go:130] ! I0318 12:44:37.209300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="28.125704ms"
	I0318 12:44:47.318609    5712 command_runner.go:130] ! I0318 12:44:37.209415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.4µs"
	I0318 12:44:47.318609    5712 command_runner.go:130] ! I0318 12:44:37.245284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.227968ms"
	I0318 12:44:47.318609    5712 command_runner.go:130] ! I0318 12:44:37.254358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="3.872028ms"
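The kube-controller-manager log above shows a clean restart: serving on 127.0.0.1:10257 at 12:43:29, all informer caches synced by 12:43:44, then the node lifecycle controller marking multinode-642600-m02 NotReady at 12:44:23 after the second node fails to report back. It was captured with docker logs --tail 400 against container 14ae9398d33b, which the container listing below resolves to kube-controller-manager-multinode-642600. A sketch for locating that ID on a fresh run (profile and container name are from this run; --filter and --format are standard docker CLI flags):

    # List the controller-manager container(s) with ID and name
    minikube ssh -p multinode-642600 \
      "docker ps -a --filter name=kube-controller-manager --format '{{.ID}} {{.Names}}'"
    # Tail its log the same way the harness does (substitute the ID printed above)
    minikube ssh -p multinode-642600 "docker logs --tail 400 <container-id>"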
	I0318 12:44:47.333870    5712 logs.go:123] Gathering logs for container status ...
	I0318 12:44:47.333870    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 12:44:47.439689    5712 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0318 12:44:47.439806    5712 command_runner.go:130] > 566e40ce923f7       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   e1b2432b0ed66       busybox-5b5d89c9d6-48qkw
	I0318 12:44:47.439806    5712 command_runner.go:130] > fcf17db92b351       ead0a4a53df89                                                                                         12 seconds ago       Running             coredns                   1                   1090dd5740980       coredns-5dd5756b68-fgn7v
	I0318 12:44:47.439806    5712 command_runner.go:130] > 4652c26c0904e       6e38f40d628db                                                                                         30 seconds ago       Running             storage-provisioner       2                   889c16eb0ab73       storage-provisioner
	I0318 12:44:47.439806    5712 command_runner.go:130] > 9fec05a61d2a9       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   5ecbdcbdad3fa       kindnet-kpt4f
	I0318 12:44:47.439956    5712 command_runner.go:130] > 787ade2ea2cd0       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   889c16eb0ab73       storage-provisioner
	I0318 12:44:47.439956    5712 command_runner.go:130] > 575b41a3a85a4       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   7a2f0ccaf5c4c       kube-proxy-4dg79
	I0318 12:44:47.439956    5712 command_runner.go:130] > a48a6d310b868       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   a7281d6e698ea       kube-apiserver-multinode-642600
	I0318 12:44:47.439956    5712 command_runner.go:130] > 14ae9398d33b1       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   eca6768355c74       kube-controller-manager-multinode-642600
	I0318 12:44:47.440096    5712 command_runner.go:130] > bd1e4f4d262e3       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   f62197122538f       kube-scheduler-multinode-642600
	I0318 12:44:47.440096    5712 command_runner.go:130] > 8e7911b58c587       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   67004ee038ee4       etcd-multinode-642600
	I0318 12:44:47.440096    5712 command_runner.go:130] > a8dd2eacb7251       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago       Exited              busybox                   0                   29bb4d534c2e2       busybox-5b5d89c9d6-48qkw
	I0318 12:44:47.440096    5712 command_runner.go:130] > e81f1d2fdb360       ead0a4a53df89                                                                                         25 minutes ago       Exited              coredns                   0                   ed38da653fbef       coredns-5dd5756b68-fgn7v
	I0318 12:44:47.440096    5712 command_runner.go:130] > 5cf42651cb21d       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago       Exited              kindnet-cni               0                   fef37141be6db       kindnet-kpt4f
	I0318 12:44:47.440200    5712 command_runner.go:130] > 4bbad08fe59ac       83f6cc407eed8                                                                                         25 minutes ago       Exited              kube-proxy                0                   2f4709a3a45a4       kube-proxy-4dg79
	I0318 12:44:47.440200    5712 command_runner.go:130] > a54be44369019       d058aa5ab969c                                                                                         26 minutes ago       Exited              kube-controller-manager   0                   d766c4514f0bf       kube-controller-manager-multinode-642600
	I0318 12:44:47.440200    5712 command_runner.go:130] > 47777d4c0b90d       e3db313c6dbc0                                                                                         26 minutes ago       Exited              kube-scheduler            0                   3500a9f1ca84e       kube-scheduler-multinode-642600
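The container inventory above comes from the crictl/docker fallback one-liner the log gatherer runs over SSH (see the ssh_runner call at the top of this block). To reproduce it by hand against this profile, a minimal sketch, assuming the multinode-642600 VM is still running and reachable:

  # Prefer crictl, fall back to docker, exactly as the gatherer does.
  out/minikube-windows-amd64.exe -p multinode-642600 ssh "sudo crictl ps -a || sudo docker ps -a"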
	I0318 12:44:47.442785    5712 logs.go:123] Gathering logs for describe nodes ...
	I0318 12:44:47.442874    5712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 12:44:47.701711    5712 command_runner.go:130] > Name:               multinode-642600
	I0318 12:44:47.701711    5712 command_runner.go:130] > Roles:              control-plane
	I0318 12:44:47.701711    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_18_52_0700
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0318 12:44:47.701711    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:47.701711    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:47.701711    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:18:46 +0000
	I0318 12:44:47.701711    5712 command_runner.go:130] > Taints:             <none>
	I0318 12:44:47.701711    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:47.701711    5712 command_runner.go:130] > Lease:
	I0318 12:44:47.701711    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600
	I0318 12:44:47.701711    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:47.701711    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:44:41 +0000
	I0318 12:44:47.701711    5712 command_runner.go:130] > Conditions:
	I0318 12:44:47.701711    5712 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0318 12:44:47.702700    5712 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0318 12:44:47.702700    5712 command_runner.go:130] >   MemoryPressure   False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0318 12:44:47.702700    5712 command_runner.go:130] >   DiskPressure     False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0318 12:44:47.702700    5712 command_runner.go:130] >   PIDPressure      False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Ready            True    Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:44:11 +0000   KubeletReady                 kubelet is posting ready status
	I0318 12:44:47.702700    5712 command_runner.go:130] > Addresses:
	I0318 12:44:47.702700    5712 command_runner.go:130] >   InternalIP:  172.25.148.129
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Hostname:    multinode-642600
	I0318 12:44:47.702700    5712 command_runner.go:130] > Capacity:
	I0318 12:44:47.702700    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:47.702700    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:47.702700    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:47.702700    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:47.702700    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:47.702700    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:47.702700    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:47.702700    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:47.702700    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:47.702700    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:47.702700    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:47.702700    5712 command_runner.go:130] > System Info:
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Machine ID:                 021cb44913fc4689ab25739f723ae3da
	I0318 12:44:47.702700    5712 command_runner.go:130] >   System UUID:                8a1bcbab-f132-7f42-b33a-a7db97e0afe6
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Boot ID:                    f11360a5-920e-4374-9d22-d06f111079d8
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:47.702700    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:47.702700    5712 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0318 12:44:47.702700    5712 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0318 12:44:47.702700    5712 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:47.702700    5712 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0318 12:44:47.702700    5712 command_runner.go:130] >   default                     busybox-5b5d89c9d6-48qkw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-fgn7v                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 etcd-multinode-642600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 kindnet-kpt4f                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-642600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-642600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 kube-proxy-4dg79                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-642600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:47.702700    5712 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0318 12:44:47.702700    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:47.702700    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Resource           Requests     Limits
	I0318 12:44:47.702700    5712 command_runner.go:130] >   --------           --------     ------
	I0318 12:44:47.702700    5712 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0318 12:44:47.702700    5712 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0318 12:44:47.702700    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0318 12:44:47.702700    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0318 12:44:47.702700    5712 command_runner.go:130] > Events:
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 12:44:47.702700    5712 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  Starting                 25m                kube-proxy       
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  Starting                 25m                kubelet          Starting kubelet.
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     25m                kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:47.702700    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    25m                kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:47.703717    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:47.703717    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  25m                kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:47.703717    5712 command_runner.go:130] >   Normal  RegisteredNode           25m                node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  NodeReady                25m                kubelet          Node multinode-642600 status is now: NodeReady
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  Starting                 83s                kubelet          Starting kubelet.
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:47.703785    5712 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	I0318 12:44:47.703785    5712 command_runner.go:130] > Name:               multinode-642600-m02
	I0318 12:44:47.703785    5712 command_runner.go:130] > Roles:              <none>
	I0318 12:44:47.703785    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600-m02
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_22_13_0700
	I0318 12:44:47.703785    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:47.703785    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:47.704355    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:47.704355    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:47.704355    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:22:12 +0000
	I0318 12:44:47.704355    5712 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 12:44:47.704355    5712 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 12:44:47.704355    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:47.704355    5712 command_runner.go:130] > Lease:
	I0318 12:44:47.704355    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600-m02
	I0318 12:44:47.704355    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:47.704355    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:40:15 +0000
	I0318 12:44:47.704355    5712 command_runner.go:130] > Conditions:
	I0318 12:44:47.704355    5712 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 12:44:47.704355    5712 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 12:44:47.704355    5712 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.704355    5712 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.704575    5712 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.704575    5712 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.704575    5712 command_runner.go:130] > Addresses:
	I0318 12:44:47.704575    5712 command_runner.go:130] >   InternalIP:  172.25.159.102
	I0318 12:44:47.704575    5712 command_runner.go:130] >   Hostname:    multinode-642600-m02
	I0318 12:44:47.704575    5712 command_runner.go:130] > Capacity:
	I0318 12:44:47.704575    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:47.704575    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:47.704687    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:47.704687    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:47.704687    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:47.704687    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:47.704687    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:47.704687    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:47.704687    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:47.704763    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:47.704763    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:47.704763    5712 command_runner.go:130] > System Info:
	I0318 12:44:47.704763    5712 command_runner.go:130] >   Machine ID:                 3840c114554e41ff9ded1410244d8aba
	I0318 12:44:47.704763    5712 command_runner.go:130] >   System UUID:                23dbf5b1-f940-4749-8caf-1ae12d869a30
	I0318 12:44:47.704763    5712 command_runner.go:130] >   Boot ID:                    9a3fcab5-beb6-4505-b112-82809850bba3
	I0318 12:44:47.704763    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:47.704763    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:47.704763    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:47.704881    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:47.704881    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:47.704881    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:47.704881    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:47.704881    5712 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0318 12:44:47.704936    5712 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0318 12:44:47.704936    5712 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0318 12:44:47.704936    5712 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:47.704936    5712 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0318 12:44:47.704999    5712 command_runner.go:130] >   default                     busybox-5b5d89c9d6-hmhdf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0318 12:44:47.704999    5712 command_runner.go:130] >   kube-system                 kindnet-d5llj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0318 12:44:47.704999    5712 command_runner.go:130] >   kube-system                 kube-proxy-vts9f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0318 12:44:47.705053    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:47.705053    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:47.705053    5712 command_runner.go:130] >   Resource           Requests   Limits
	I0318 12:44:47.705115    5712 command_runner.go:130] >   --------           --------   ------
	I0318 12:44:47.705115    5712 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 12:44:47.705115    5712 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 12:44:47.705115    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 12:44:47.705115    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 12:44:47.705115    5712 command_runner.go:130] > Events:
	I0318 12:44:47.705115    5712 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 12:44:47.705169    5712 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 12:44:47.705169    5712 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0318 12:44:47.705169    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientMemory
	I0318 12:44:47.705169    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasNoDiskPressure
	I0318 12:44:47.705228    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x5 over 22m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientPID
	I0318 12:44:47.705228    5712 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	I0318 12:44:47.705267    5712 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-642600-m02 status is now: NodeReady
	I0318 12:44:47.705267    5712 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	I0318 12:44:47.705267    5712 command_runner.go:130] >   Normal  NodeNotReady             24s                node-controller  Node multinode-642600-m02 status is now: NodeNotReady
	I0318 12:44:47.705313    5712 command_runner.go:130] > Name:               multinode-642600-m03
	I0318 12:44:47.705313    5712 command_runner.go:130] > Roles:              <none>
	I0318 12:44:47.705313    5712 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 12:44:47.705313    5712 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 12:44:47.705313    5712 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     kubernetes.io/hostname=multinode-642600-m03
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     minikube.k8s.io/name=multinode-642600
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_38_47_0700
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 12:44:47.705369    5712 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 12:44:47.705369    5712 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 12:44:47.705492    5712 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 12:44:47.705492    5712 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:38:46 +0000
	I0318 12:44:47.705492    5712 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 12:44:47.705544    5712 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 12:44:47.705544    5712 command_runner.go:130] > Unschedulable:      false
	I0318 12:44:47.705544    5712 command_runner.go:130] > Lease:
	I0318 12:44:47.705544    5712 command_runner.go:130] >   HolderIdentity:  multinode-642600-m03
	I0318 12:44:47.705544    5712 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 12:44:47.705544    5712 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 12:39:48 +0000
	I0318 12:44:47.705544    5712 command_runner.go:130] > Conditions:
	I0318 12:44:47.705602    5712 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 12:44:47.705602    5712 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 12:44:47.705602    5712 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.705602    5712 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.705728    5712 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.705830    5712 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 12:44:47.705853    5712 command_runner.go:130] > Addresses:
	I0318 12:44:47.705853    5712 command_runner.go:130] >   InternalIP:  172.25.157.200
	I0318 12:44:47.705887    5712 command_runner.go:130] >   Hostname:    multinode-642600-m03
	I0318 12:44:47.705887    5712 command_runner.go:130] > Capacity:
	I0318 12:44:47.705887    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:47.705887    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:47.705887    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:47.705887    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:47.705887    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:47.705887    5712 command_runner.go:130] > Allocatable:
	I0318 12:44:47.705887    5712 command_runner.go:130] >   cpu:                2
	I0318 12:44:47.705955    5712 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 12:44:47.705955    5712 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 12:44:47.705955    5712 command_runner.go:130] >   memory:             2164268Ki
	I0318 12:44:47.705955    5712 command_runner.go:130] >   pods:               110
	I0318 12:44:47.706011    5712 command_runner.go:130] > System Info:
	I0318 12:44:47.706011    5712 command_runner.go:130] >   Machine ID:                 b858c7f1c1bc42a69e1927ccc26ea5ce
	I0318 12:44:47.706011    5712 command_runner.go:130] >   System UUID:                8c4fd36f-ab8b-5447-9df2-542afafc5ab4
	I0318 12:44:47.706011    5712 command_runner.go:130] >   Boot ID:                    cea0ecfe-24ab-4614-a808-1e2a7a960f26
	I0318 12:44:47.706074    5712 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 12:44:47.706074    5712 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 12:44:47.706074    5712 command_runner.go:130] >   Operating System:           linux
	I0318 12:44:47.706074    5712 command_runner.go:130] >   Architecture:               amd64
	I0318 12:44:47.706074    5712 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 12:44:47.706074    5712 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 12:44:47.706074    5712 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 12:44:47.706128    5712 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0318 12:44:47.706128    5712 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0318 12:44:47.706128    5712 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 12:44:47.706128    5712 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0318 12:44:47.706128    5712 command_runner.go:130] >   kube-system                 kindnet-thkjp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0318 12:44:47.706128    5712 command_runner.go:130] >   kube-system                 kube-proxy-khbjt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0318 12:44:47.706128    5712 command_runner.go:130] > Allocated resources:
	I0318 12:44:47.706128    5712 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Resource           Requests   Limits
	I0318 12:44:47.706128    5712 command_runner.go:130] >   --------           --------   ------
	I0318 12:44:47.706128    5712 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 12:44:47.706128    5712 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 12:44:47.706128    5712 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 12:44:47.706128    5712 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 12:44:47.706128    5712 command_runner.go:130] > Events:
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Type    Reason                   Age                  From             Message
	I0318 12:44:47.706128    5712 command_runner.go:130] >   ----    ------                   ----                 ----             -------
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  Starting                 17m                  kube-proxy       
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  Starting                 5m58s                kube-proxy       
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x5 over 17m)    kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x5 over 17m)    kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x5 over 17m)    kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeReady                17m                  kubelet          Node multinode-642600-m03 status is now: NodeReady
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  Starting                 6m1s                 kubelet          Starting kubelet.
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeHasSufficientMemory  6m1s (x2 over 6m1s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    6m1s (x2 over 6m1s)  kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeHasSufficientPID     6m1s (x2 over 6m1s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeAllocatableEnforced  6m1s                 kubelet          Updated Node Allocatable limit across pods
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  RegisteredNode           6m                   node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeReady                5m55s                kubelet          Node multinode-642600-m03 status is now: NodeReady
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  NodeNotReady             4m14s                node-controller  Node multinode-642600-m03 status is now: NodeNotReady
	I0318 12:44:47.706128    5712 command_runner.go:130] >   Normal  RegisteredNode           64s                  node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
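The three node reports above summarize this failure: the control plane is Ready again after its restart, while multinode-642600-m02 and multinode-642600-m03 still carry node.kubernetes.io/unreachable taints because their kubelets stopped posting status. To re-check only the per-node state without the full describe output, a minimal sketch using the same embedded kubectl and kubeconfig path the gatherer invokes:

  # One row per node with roles, status, and internal IPs.
  sudo /var/lib/minikube/binaries/v1.28.4/kubectl get nodes -o wide --kubeconfig=/var/lib/minikube/kubeconfig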
	I0318 12:44:47.718256    5712 logs.go:123] Gathering logs for kube-scheduler [47777d4c0b90] ...
	I0318 12:44:47.718256    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47777d4c0b90"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:43.828879       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.562226       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.562618       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.562705       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.562793       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:46.615857       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:46.615957       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:46.622177       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:46.622201       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:46.625084       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 12:44:47.752951    5712 command_runner.go:130] ! I0318 12:18:46.625162       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.631110       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! E0318 12:18:46.631164       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.634891       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:47.752951    5712 command_runner.go:130] ! E0318 12:18:46.634917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.636313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:47.752951    5712 command_runner.go:130] ! E0318 12:18:46.638655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:47.752951    5712 command_runner.go:130] ! W0318 12:18:46.636730       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.639099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.636905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.639254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.636986       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.639495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.641683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.641953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.642236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.642375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.642673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.646073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.647270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.646147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.647534       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.646208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.647719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.646271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.647738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.646322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.647752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.647915       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:46.650301       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:46.650528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:47.471960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:47.472093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:47.540921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:47.541368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:47.545171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:47.546126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:47.563772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:47.563806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:47.597770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! E0318 12:18:47.597873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 12:44:47.753940    5712 command_runner.go:130] ! W0318 12:18:47.684794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:47.754972    5712 command_runner.go:130] ! E0318 12:18:47.685008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 12:44:47.754972    5712 command_runner.go:130] ! W0318 12:18:47.685352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:47.685509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:47.840132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:47.840303       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:47.879838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:47.880363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:47.906171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:47.906493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:48.059997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:48.060049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:48.096160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:48.096304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:48.096504       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:48.096662       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:48.133175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! E0318 12:18:48.133469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:44:47.755046    5712 command_runner.go:130] ! W0318 12:18:48.135066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:47.755599    5712 command_runner.go:130] ! E0318 12:18:48.135196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:44:47.755599    5712 command_runner.go:130] ! I0318 12:18:50.022459       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:47.755599    5712 command_runner.go:130] ! E0318 12:40:51.995231       1 run.go:74] "command failed" err="finished without leader elect"
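The reflector warnings above are the usual startup ordering race: the scheduler's informers begin listing before the apiserver has its RBAC bootstrap in place, and they stop once caches sync (the 12:18:50 line). A minimal way to confirm the permissions are actually granted after startup, assuming the kubeconfig context name taken from this run's logs:

    # Hedged sketch: "yes" from both checks means the failures above were
    # transient ordering during startup, not a missing RBAC binding.
    kubectl --context multinode-642600 auth can-i list pods --as=system:kube-scheduler --all-namespaces
    kubectl --context multinode-642600 auth can-i watch statefulsets.apps --as=system:kube-scheduler --all-namespaces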
	I0318 12:44:47.768181    5712 logs.go:123] Gathering logs for kindnet [9fec05a61d2a] ...
	I0318 12:44:47.768181    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fec05a61d2a"
	I0318 12:44:47.800186    5712 command_runner.go:130] ! I0318 12:43:33.429181       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 12:44:47.800323    5712 command_runner.go:130] ! I0318 12:43:33.431032       1 main.go:107] hostIP = 172.25.148.129
	I0318 12:44:47.800379    5712 command_runner.go:130] ! podIP = 172.25.148.129
	I0318 12:44:47.800379    5712 command_runner.go:130] ! I0318 12:43:33.432708       1 main.go:116] setting mtu 1500 for CNI 
	I0318 12:44:47.800379    5712 command_runner.go:130] ! I0318 12:43:33.432750       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 12:44:47.800379    5712 command_runner.go:130] ! I0318 12:43:33.432773       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 12:44:47.800379    5712 command_runner.go:130] ! I0318 12:44:03.855331       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0318 12:44:47.800472    5712 command_runner.go:130] ! I0318 12:44:03.906638       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:47.800472    5712 command_runner.go:130] ! I0318 12:44:03.906763       1 main.go:227] handling current node
	I0318 12:44:47.800472    5712 command_runner.go:130] ! I0318 12:44:03.907280       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.800472    5712 command_runner.go:130] ! I0318 12:44:03.907371       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.800539    5712 command_runner.go:130] ! I0318 12:44:03.907763       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.25.159.102 Flags: [] Table: 0} 
	I0318 12:44:47.800539    5712 command_runner.go:130] ! I0318 12:44:03.907983       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.800539    5712 command_runner.go:130] ! I0318 12:44:03.907999       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.800539    5712 command_runner.go:130] ! I0318 12:44:03.908063       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.157.200 Flags: [] Table: 0} 
	I0318 12:44:47.800539    5712 command_runner.go:130] ! I0318 12:44:13.926166       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:47.800610    5712 command_runner.go:130] ! I0318 12:44:13.926260       1 main.go:227] handling current node
	I0318 12:44:47.800610    5712 command_runner.go:130] ! I0318 12:44:13.926281       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.800610    5712 command_runner.go:130] ! I0318 12:44:13.926377       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.800610    5712 command_runner.go:130] ! I0318 12:44:13.927231       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.800694    5712 command_runner.go:130] ! I0318 12:44:13.927364       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.800694    5712 command_runner.go:130] ! I0318 12:44:23.943396       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:47.800694    5712 command_runner.go:130] ! I0318 12:44:23.943437       1 main.go:227] handling current node
	I0318 12:44:47.800694    5712 command_runner.go:130] ! I0318 12:44:23.943450       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.800752    5712 command_runner.go:130] ! I0318 12:44:23.943456       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.800752    5712 command_runner.go:130] ! I0318 12:44:23.943816       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.800752    5712 command_runner.go:130] ! I0318 12:44:23.943956       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.800752    5712 command_runner.go:130] ! I0318 12:44:33.951114       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:47.800853    5712 command_runner.go:130] ! I0318 12:44:33.951215       1 main.go:227] handling current node
	I0318 12:44:47.800853    5712 command_runner.go:130] ! I0318 12:44:33.951232       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.800853    5712 command_runner.go:130] ! I0318 12:44:33.951241       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.800909    5712 command_runner.go:130] ! I0318 12:44:33.951807       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.800909    5712 command_runner.go:130] ! I0318 12:44:33.951927       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:44:47.800909    5712 command_runner.go:130] ! I0318 12:44:43.968530       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:44:47.800909    5712 command_runner.go:130] ! I0318 12:44:43.968658       1 main.go:227] handling current node
	I0318 12:44:47.800909    5712 command_runner.go:130] ! I0318 12:44:43.968737       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:44:47.800985    5712 command_runner.go:130] ! I0318 12:44:43.968990       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:44:47.800985    5712 command_runner.go:130] ! I0318 12:44:43.969485       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:44:47.800985    5712 command_runner.go:130] ! I0318 12:44:43.969715       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
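The "Adding route" lines show kindnetd installing one static route per peer node, pointing that node's pod CIDR at the node's host IP. An equivalent sketch with iproute2 (kindnetd itself programs these via netlink; CIDRs and gateway addresses are copied from the log lines above):

    # What the routes.go entries above amount to on this node:
    ip route replace 10.244.1.0/24 via 172.25.159.102   # multinode-642600-m02
    ip route replace 10.244.3.0/24 via 172.25.157.200   # multinode-642600-m03
    ip route show | grep 10.244                         # inspect the installed routes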
	I0318 12:44:47.805222    5712 logs.go:123] Gathering logs for kube-scheduler [bd1e4f4d262e] ...
	I0318 12:44:47.805291    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1e4f4d262e"
	I0318 12:44:47.837374    5712 command_runner.go:130] ! I0318 12:43:27.649061       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:47.837871    5712 command_runner.go:130] ! W0318 12:43:30.548831       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 12:44:47.837871    5712 command_runner.go:130] ! W0318 12:43:30.549092       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:44:47.837871    5712 command_runner.go:130] ! W0318 12:43:30.549282       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 12:44:47.838003    5712 command_runner.go:130] ! W0318 12:43:30.549461       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.613305       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.613417       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.618512       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.619171       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.619276       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.620071       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:47.838003    5712 command_runner.go:130] ! I0318 12:43:30.720411       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:44:47.841047    5712 logs.go:123] Gathering logs for kube-proxy [4bbad08fe59a] ...
	I0318 12:44:47.841047    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bbad08fe59a"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:04.970720       1 server_others.go:69] "Using iptables proxy"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:04.997380       1 node.go:141] Successfully retrieved node IP: 172.25.151.112
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.099028       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.099065       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.102885       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.103013       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.103652       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.103704       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.105505       1 config.go:188] "Starting service config controller"
	I0318 12:44:47.873258    5712 command_runner.go:130] ! I0318 12:19:05.106093       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.106131       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.106138       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.107424       1 config.go:315] "Starting node config controller"
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.107456       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.206699       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.206811       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:44:47.873987    5712 command_runner.go:130] ! I0318 12:19:05.207857       1 shared_informer.go:318] Caches are synced for node config
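The proxier.go line above refers to the route_localnet sysctl, which is what lets NodePort services answer on 127.0.0.1. For reference, the equivalent manual setting, together with the opt-outs the message itself names:

    # The sysctl kube-proxy sets on startup (per the proxier.go line above):
    sysctl -w net.ipv4.conf.all.route_localnet=1
    # Opt out with --iptables-localhost-nodeports=false or by restricting
    # --nodeport-addresses, as the log message suggests.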
	I0318 12:44:47.876492    5712 logs.go:123] Gathering logs for kube-apiserver [a48a6d310b86] ...
	I0318 12:44:47.876576    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a48a6d310b86"
	I0318 12:44:47.903035    5712 command_runner.go:130] ! I0318 12:43:26.873064       1 options.go:220] external host was not specified, using 172.25.148.129
	I0318 12:44:47.903035    5712 command_runner.go:130] ! I0318 12:43:26.879001       1 server.go:148] Version: v1.28.4
	I0318 12:44:47.903035    5712 command_runner.go:130] ! I0318 12:43:26.879883       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:27.623853       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:27.658081       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:27.658128       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:27.660963       1 instance.go:298] Using reconciler: lease
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:27.814829       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:27.815233       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:28.557814       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:28.558168       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.283146       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.346403       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.360856       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.360910       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.361419       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.361431       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.362356       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.365115       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.365134       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.365140       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.370774       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.370809       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.375063       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.375102       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.375108       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.375862       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.375929       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.375979       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.376693       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.384185       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.384228       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.384236       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.385110       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.385148       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.385155       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.388232       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.388272       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.392835       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.392872       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.392880       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.393504       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.393628       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.393636       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.401801       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.401838       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.401846       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.405508       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.409452       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.409492       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.409500       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.904057    5712 command_runner.go:130] ! I0318 12:43:29.421682       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0318 12:44:47.904057    5712 command_runner.go:130] ! W0318 12:43:29.421819       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0318 12:44:47.905098    5712 command_runner.go:130] ! W0318 12:43:29.421829       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0318 12:44:47.905098    5712 command_runner.go:130] ! I0318 12:43:29.426368       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0318 12:44:47.905098    5712 command_runner.go:130] ! W0318 12:43:29.426405       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.905098    5712 command_runner.go:130] ! W0318 12:43:29.426413       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0318 12:44:47.905098    5712 command_runner.go:130] ! I0318 12:43:29.427337       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0318 12:44:47.905098    5712 command_runner.go:130] ! W0318 12:43:29.427376       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.905098    5712 command_runner.go:130] ! I0318 12:43:29.459555       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0318 12:44:47.905098    5712 command_runner.go:130] ! W0318 12:43:29.459595       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0318 12:44:47.905098    5712 command_runner.go:130] ! I0318 12:43:30.367734       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:47.905098    5712 command_runner.go:130] ! I0318 12:43:30.367932       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:47.905331    5712 command_runner.go:130] ! I0318 12:43:30.368782       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0318 12:44:47.905331    5712 command_runner.go:130] ! I0318 12:43:30.370542       1 secure_serving.go:213] Serving securely on [::]:8443
	I0318 12:44:47.905331    5712 command_runner.go:130] ! I0318 12:43:30.370628       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:47.905331    5712 command_runner.go:130] ! I0318 12:43:30.371667       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:44:47.905331    5712 command_runner.go:130] ! I0318 12:43:30.372321       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.372682       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.373559       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.373947       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.374159       1 available_controller.go:423] Starting AvailableConditionController
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.374194       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.374404       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.374979       1 aggregator.go:164] waiting for initial CRD sync...
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.375087       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 12:44:47.905474    5712 command_runner.go:130] ! I0318 12:43:30.375452       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 12:44:47.905589    5712 command_runner.go:130] ! I0318 12:43:30.376837       1 controller.go:116] Starting legacy_token_tracking_controller
	I0318 12:44:47.905589    5712 command_runner.go:130] ! I0318 12:43:30.377105       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0318 12:44:47.905589    5712 command_runner.go:130] ! I0318 12:43:30.377485       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0318 12:44:47.905589    5712 command_runner.go:130] ! I0318 12:43:30.378013       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 12:44:47.905589    5712 command_runner.go:130] ! I0318 12:43:30.378732       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0318 12:44:47.905704    5712 command_runner.go:130] ! I0318 12:43:30.379224       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.379834       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.380470       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.380848       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.382047       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.382230       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.383964       1 controller.go:134] Starting OpenAPI controller
	I0318 12:44:47.905837    5712 command_runner.go:130] ! I0318 12:43:30.384158       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 12:44:47.906006    5712 command_runner.go:130] ! I0318 12:43:30.384420       1 naming_controller.go:291] Starting NamingConditionController
	I0318 12:44:47.906006    5712 command_runner.go:130] ! I0318 12:43:30.384790       1 establishing_controller.go:76] Starting EstablishingController
	I0318 12:44:47.906006    5712 command_runner.go:130] ! I0318 12:43:30.385986       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 12:44:47.906006    5712 command_runner.go:130] ! I0318 12:43:30.386163       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 12:44:47.906006    5712 command_runner.go:130] ! I0318 12:43:30.386327       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 12:44:47.906115    5712 command_runner.go:130] ! I0318 12:43:30.474963       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 12:44:47.906115    5712 command_runner.go:130] ! I0318 12:43:30.476622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 12:44:47.906115    5712 command_runner.go:130] ! I0318 12:43:30.496736       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.497067       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.497511       1 aggregator.go:166] initial CRD sync complete...
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.498503       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.498662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.498825       1 cache.go:39] Caches are synced for autoregister controller
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.570075       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.585880       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 12:44:47.906250    5712 command_runner.go:130] ! I0318 12:43:30.624565       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 12:44:47.906366    5712 command_runner.go:130] ! I0318 12:43:30.681515       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 12:44:47.906366    5712 command_runner.go:130] ! I0318 12:43:30.681604       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 12:44:47.906366    5712 command_runner.go:130] ! I0318 12:43:31.410513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 12:44:47.906366    5712 command_runner.go:130] ! W0318 12:43:31.917736       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129 172.25.151.112]
	I0318 12:44:47.906366    5712 command_runner.go:130] ! I0318 12:43:31.919293       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 12:44:47.906366    5712 command_runner.go:130] ! I0318 12:43:31.929122       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 12:44:47.906366    5712 command_runner.go:130] ! I0318 12:43:34.160688       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 12:44:47.906481    5712 command_runner.go:130] ! I0318 12:43:34.367742       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 12:44:47.906481    5712 command_runner.go:130] ! I0318 12:43:34.406080       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 12:44:47.906481    5712 command_runner.go:130] ! I0318 12:43:34.542647       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 12:44:47.906481    5712 command_runner.go:130] ! I0318 12:43:34.562855       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 12:44:47.906481    5712 command_runner.go:130] ! W0318 12:43:51.920595       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129]
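The two lease.go lines bracket the control-plane IP change in this run: the restarted apiserver first publishes both the old address (172.25.151.112) and the new one (172.25.148.129), then drops the stale entry twenty seconds later. A quick check that the endpoint converged, assuming the context name from this run (the apiserver above serves on [::]:8443):

    # The kubernetes Service endpoints should settle on the single new address:
    kubectl --context multinode-642600 get endpoints kubernetes -o wide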
	I0318 12:44:47.915404    5712 logs.go:123] Gathering logs for coredns [fcf17db92b35] ...
	I0318 12:44:47.915445    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf17db92b35"
	I0318 12:44:47.947272    5712 command_runner.go:130] > .:53
	I0318 12:44:47.947376    5712 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	I0318 12:44:47.947376    5712 command_runner.go:130] > CoreDNS-1.10.1
	I0318 12:44:47.947376    5712 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 12:44:47.947376    5712 command_runner.go:130] > [INFO] 127.0.0.1:53681 - 55845 "HINFO IN 162544917519141994.8165783507281513505. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.028223444s
	I0318 12:44:47.947623    5712 logs.go:123] Gathering logs for coredns [e81f1d2fdb36] ...
	I0318 12:44:47.947753    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e81f1d2fdb36"
	I0318 12:44:47.983213    5712 command_runner.go:130] > .:53
	I0318 12:44:47.983213    5712 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	I0318 12:44:47.983213    5712 command_runner.go:130] > CoreDNS-1.10.1
	I0318 12:44:47.983213    5712 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 12:44:47.983213    5712 command_runner.go:130] > [INFO] 127.0.0.1:48183 - 41539 "HINFO IN 767578685007701398.8900982300391989616. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.040167772s
	I0318 12:44:47.983213    5712 command_runner.go:130] > [INFO] 10.244.0.3:56190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320901s
	I0318 12:44:47.983213    5712 command_runner.go:130] > [INFO] 10.244.0.3:43050 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.04023503s
	I0318 12:44:47.983213    5712 command_runner.go:130] > [INFO] 10.244.0.3:47302 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.158419612s
	I0318 12:44:47.984292    5712 command_runner.go:130] > [INFO] 10.244.0.3:37199 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.162590352s
	I0318 12:44:47.984292    5712 command_runner.go:130] > [INFO] 10.244.1.2:48003 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216101s
	I0318 12:44:47.984359    5712 command_runner.go:130] > [INFO] 10.244.1.2:48857 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000380201s
	I0318 12:44:47.984465    5712 command_runner.go:130] > [INFO] 10.244.1.2:52412 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000070401s
	I0318 12:44:47.984567    5712 command_runner.go:130] > [INFO] 10.244.1.2:59362 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000071801s
	I0318 12:44:47.984567    5712 command_runner.go:130] > [INFO] 10.244.0.3:38833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000250501s
	I0318 12:44:47.984567    5712 command_runner.go:130] > [INFO] 10.244.0.3:34860 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.064163607s
	I0318 12:44:47.984659    5712 command_runner.go:130] > [INFO] 10.244.0.3:45210 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227601s
	I0318 12:44:47.984659    5712 command_runner.go:130] > [INFO] 10.244.0.3:32804 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001229s
	I0318 12:44:47.984659    5712 command_runner.go:130] > [INFO] 10.244.0.3:44904 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01563145s
	I0318 12:44:47.984659    5712 command_runner.go:130] > [INFO] 10.244.0.3:34958 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002035s
	I0318 12:44:47.984750    5712 command_runner.go:130] > [INFO] 10.244.0.3:59094 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001507s
	I0318 12:44:47.984750    5712 command_runner.go:130] > [INFO] 10.244.0.3:39370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181001s
	I0318 12:44:47.984750    5712 command_runner.go:130] > [INFO] 10.244.1.2:40318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000302101s
	I0318 12:44:47.984750    5712 command_runner.go:130] > [INFO] 10.244.1.2:43523 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001489s
	I0318 12:44:47.984750    5712 command_runner.go:130] > [INFO] 10.244.1.2:47882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001346s
	I0318 12:44:47.984842    5712 command_runner.go:130] > [INFO] 10.244.1.2:38222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057401s
	I0318 12:44:47.984842    5712 command_runner.go:130] > [INFO] 10.244.1.2:49068 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001253s
	I0318 12:44:47.984842    5712 command_runner.go:130] > [INFO] 10.244.1.2:35375 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000582s
	I0318 12:44:47.984842    5712 command_runner.go:130] > [INFO] 10.244.1.2:40933 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179201s
	I0318 12:44:47.984842    5712 command_runner.go:130] > [INFO] 10.244.1.2:36014 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002051s
	I0318 12:44:47.984933    5712 command_runner.go:130] > [INFO] 10.244.0.3:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265401s
	I0318 12:44:47.984933    5712 command_runner.go:130] > [INFO] 10.244.0.3:52912 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148001s
	I0318 12:44:47.984933    5712 command_runner.go:130] > [INFO] 10.244.0.3:33147 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143701s
	I0318 12:44:47.984933    5712 command_runner.go:130] > [INFO] 10.244.0.3:49893 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000536s
	I0318 12:44:47.984933    5712 command_runner.go:130] > [INFO] 10.244.1.2:42681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001221s
	I0318 12:44:47.985022    5712 command_runner.go:130] > [INFO] 10.244.1.2:41416 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143s
	I0318 12:44:47.985022    5712 command_runner.go:130] > [INFO] 10.244.1.2:58254 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000241501s
	I0318 12:44:47.985022    5712 command_runner.go:130] > [INFO] 10.244.1.2:35844 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197201s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.0.3:33559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102201s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.0.3:53963 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158701s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.0.3:41406 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001297s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.0.3:34685 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000264001s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.1.2:43312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001178s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.1.2:55281 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235501s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.1.2:34710 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000874s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] 10.244.1.2:57686 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000557s
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0318 12:44:47.985113    5712 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
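The query log above is an ordinary resolver search-path walk: "kubernetes.default" returns NXDOMAIN through the expanded suffixes until "kubernetes.default.svc.cluster.local" answers NOERROR. A minimal in-cluster reproduction, assuming any pod image that ships nslookup (busybox:1.28 here is illustrative, not taken from this run):

    kubectl --context multinode-642600 run dns-test --rm -it --restart=Never \
      --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local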
	I0318 12:44:47.988652    5712 logs.go:123] Gathering logs for etcd [8e7911b58c58] ...
	I0318 12:44:47.988708    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7911b58c58"
	I0318 12:44:48.020960    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.200481Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 12:44:48.021471    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210029Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.25.148.129:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.25.148.129:2380","--initial-cluster=multinode-642600=https://172.25.148.129:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.25.148.129:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.25.148.129:2380","--name=multinode-642600","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0318 12:44:48.021471    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210181Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0318 12:44:48.021543    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.21031Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 12:44:48.021543    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210331Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.25.148.129:2380"]}
	I0318 12:44:48.021618    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.210546Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 12:44:48.021700    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.222773Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"]}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.228178Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-642600","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.25.148.129:2380"],"listen-peer-urls":["https://172.25.148.129:2380"],"advertise-client-urls":["https://172.25.148.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.271498Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"41.739133ms"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.299465Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.319578Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","commit-index":2138}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.319995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 switched to configuration voters=()"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.320107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became follower at term 2"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.320138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 78764271becab2d0 [peers: [], term: 2, commit: 2138, applied: 0, lastindex: 2138, lastterm: 2]"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T12:43:26.325366Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.329191Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1388}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.333388Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1848}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.357951Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.372436Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"78764271becab2d0","timeout":"7s"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373126Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"78764271becab2d0"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373252Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"78764271becab2d0","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.373688Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375391Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375647Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.375735Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.377469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 switched to configuration voters=(8680198388102902480)"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.377568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","added-peer-id":"78764271becab2d0","added-peer-peer-urls":["https://172.25.151.112:2380"]}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.378749Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","cluster-version":"3.5"}
	I0318 12:44:48.021860    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.378942Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0318 12:44:48.022416    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.380244Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 12:44:48.022489    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.380886Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"78764271becab2d0","initial-advertise-peer-urls":["https://172.25.148.129:2380"],"listen-peer-urls":["https://172.25.148.129:2380"],"advertise-client-urls":["https://172.25.148.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0318 12:44:48.022489    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.383141Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.25.148.129:2380"}
	I0318 12:44:48.022489    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.383279Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.25.148.129:2380"}
	I0318 12:44:48.022489    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:26.393018Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0318 12:44:48.022582    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.621966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 is starting a new election at term 2"}
	I0318 12:44:48.022582    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became pre-candidate at term 2"}
	I0318 12:44:48.022647    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgPreVoteResp from 78764271becab2d0 at term 2"}
	I0318 12:44:48.022647    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.622825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became candidate at term 3"}
	I0318 12:44:48.022719    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgVoteResp from 78764271becab2d0 at term 3"}
	I0318 12:44:48.022719    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became leader at term 3"}
	I0318 12:44:48.022719    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.624696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 78764271becab2d0 elected leader 78764271becab2d0 at term 3"}
	I0318 12:44:48.022790    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641347Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"78764271becab2d0","local-member-attributes":"{Name:multinode-642600 ClientURLs:[https://172.25.148.129:2379]}","request-path":"/0/members/78764271becab2d0/attributes","cluster-id":"31713adf8492fbc4","publish-timeout":"7s"}
	I0318 12:44:48.022790    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 12:44:48.022841    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.64409Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0318 12:44:48.022894    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.644373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0318 12:44:48.022894    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.641995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 12:44:48.022931    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.650212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.148.129:2379"}
	I0318 12:44:48.022968    5712 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T12:43:27.651053Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
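
The etcd lines above show a single-member cluster coming back after a restart: the member rejoins, pre-votes, elects itself leader at term 3, and then serves clients on 172.25.148.129:2379 and 127.0.0.1:2379. As a rough aid for reproducing that check by hand, here is a minimal sketch that queries the member's status with the etcd clientv3 API; the endpoint is the loopback listen-client-url from the log, and the TLS material (which the log shows is required via client-cert-auth = true) is deliberately omitted as an assumption.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Endpoint copied from the "listen-client-urls" line above. A real probe
	// against this cluster would also need a tls.Config built from
	// server.crt/server.key/ca.crt, which is left out of this sketch.
	endpoint := "https://127.0.0.1:2379"

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{endpoint},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Status reports the server version and the current leader; a non-zero
	// Leader corresponds to the "became leader at term 3" raft lines above.
	resp, err := cli.Status(ctx, endpoint)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("version=%s leader=%x member=%x\n", resp.Version, resp.Leader, resp.Header.MemberId)
}
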
	I0318 12:44:48.032306    5712 logs.go:123] Gathering logs for kube-proxy [575b41a3a85a] ...
	I0318 12:44:48.032306    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 575b41a3a85a"
	I0318 12:44:48.097346    5712 command_runner.go:130] ! I0318 12:43:33.336778       1 server_others.go:69] "Using iptables proxy"
	I0318 12:44:48.097346    5712 command_runner.go:130] ! I0318 12:43:33.550433       1 node.go:141] Successfully retrieved node IP: 172.25.148.129
	I0318 12:44:48.097346    5712 command_runner.go:130] ! I0318 12:43:33.793084       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:44:48.097879    5712 command_runner.go:130] ! I0318 12:43:33.793109       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:44:48.097879    5712 command_runner.go:130] ! I0318 12:43:33.796954       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:44:48.097879    5712 command_runner.go:130] ! I0318 12:43:33.798936       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:44:48.097879    5712 command_runner.go:130] ! I0318 12:43:33.800347       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:44:48.097948    5712 command_runner.go:130] ! I0318 12:43:33.800569       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:48.097948    5712 command_runner.go:130] ! I0318 12:43:33.803648       1 config.go:188] "Starting service config controller"
	I0318 12:44:48.097948    5712 command_runner.go:130] ! I0318 12:43:33.805156       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:44:48.097948    5712 command_runner.go:130] ! I0318 12:43:33.805421       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:44:48.098019    5712 command_runner.go:130] ! I0318 12:43:33.805584       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:44:48.098019    5712 command_runner.go:130] ! I0318 12:43:33.808628       1 config.go:315] "Starting node config controller"
	I0318 12:44:48.098019    5712 command_runner.go:130] ! I0318 12:43:33.808736       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:44:48.098019    5712 command_runner.go:130] ! I0318 12:43:33.905580       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:44:48.098019    5712 command_runner.go:130] ! I0318 12:43:33.907041       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:44:48.098019    5712 command_runner.go:130] ! I0318 12:43:33.909416       1 shared_informer.go:318] Caches are synced for node config
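
The kube-proxy startup above follows the standard client-go informer pattern: each config controller first logs "Waiting for caches to sync" and later "Caches are synced". A minimal sketch of that same pattern, assuming a placeholder kubeconfig path (not a path from this report):

package main

import (
	"fmt"
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Mirrors the "Waiting for caches to sync" / "Caches are synced" pairs
	// logged by the service and endpoint-slice config controllers above.
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		log.Fatal("cache sync failed")
	}
	fmt.Println("caches are synced for service config")
}
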
	I0318 12:44:48.100561    5712 logs.go:123] Gathering logs for kube-controller-manager [a54be4436901] ...
	I0318 12:44:48.100617    5712 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54be4436901"
	I0318 12:44:48.137560    5712 command_runner.go:130] ! I0318 12:18:43.818653       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:44:48.139082    5712 command_runner.go:130] ! I0318 12:18:45.050029       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:45.050365       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:45.053707       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:45.056733       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:45.057073       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:45.057232       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:49.569825       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:49.602388       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:49.603663       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 12:44:48.139136    5712 command_runner.go:130] ! I0318 12:18:49.603680       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 12:44:48.139290    5712 command_runner.go:130] ! I0318 12:18:49.621364       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 12:44:48.139290    5712 command_runner.go:130] ! I0318 12:18:49.621624       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:48.139348    5712 command_runner.go:130] ! I0318 12:18:49.621432       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 12:44:48.139348    5712 command_runner.go:130] ! I0318 12:18:49.622281       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 12:44:48.139348    5712 command_runner.go:130] ! I0318 12:18:49.644362       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 12:44:48.139398    5712 command_runner.go:130] ! I0318 12:18:49.644758       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 12:44:48.139398    5712 command_runner.go:130] ! I0318 12:18:49.646607       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 12:44:48.139437    5712 command_runner.go:130] ! I0318 12:18:49.660400       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 12:44:48.139461    5712 command_runner.go:130] ! I0318 12:18:49.661053       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 12:44:48.139461    5712 command_runner.go:130] ! I0318 12:18:49.670023       1 shared_informer.go:318] Caches are synced for tokens
	I0318 12:44:48.139507    5712 command_runner.go:130] ! I0318 12:18:49.679784       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 12:44:48.139507    5712 command_runner.go:130] ! I0318 12:18:49.680015       1 expand_controller.go:328] "Starting expand controller"
	I0318 12:44:48.139507    5712 command_runner.go:130] ! I0318 12:18:49.680028       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 12:44:48.139507    5712 command_runner.go:130] ! I0318 12:18:49.692925       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 12:44:48.139575    5712 command_runner.go:130] ! I0318 12:18:49.693164       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 12:44:48.139575    5712 command_runner.go:130] ! I0318 12:18:49.693449       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 12:44:48.139575    5712 command_runner.go:130] ! I0318 12:18:49.727464       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 12:44:48.139575    5712 command_runner.go:130] ! I0318 12:18:49.727835       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 12:44:48.139642    5712 command_runner.go:130] ! I0318 12:18:49.727848       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 12:44:48.139642    5712 command_runner.go:130] ! I0318 12:18:49.742409       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 12:44:48.139701    5712 command_runner.go:130] ! I0318 12:18:49.743029       1 disruption.go:433] "Sending events to api server."
	I0318 12:44:48.139701    5712 command_runner.go:130] ! I0318 12:18:49.743301       1 disruption.go:444] "Starting disruption controller"
	I0318 12:44:48.139701    5712 command_runner.go:130] ! I0318 12:18:49.743449       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 12:44:48.139701    5712 command_runner.go:130] ! I0318 12:18:49.759716       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 12:44:48.139766    5712 command_runner.go:130] ! I0318 12:18:49.760338       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 12:44:48.139766    5712 command_runner.go:130] ! I0318 12:18:49.760376       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 12:44:48.139766    5712 command_runner.go:130] ! I0318 12:18:49.829809       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 12:44:48.139824    5712 command_runner.go:130] ! I0318 12:18:49.830343       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 12:44:48.139824    5712 command_runner.go:130] ! I0318 12:18:49.830415       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 12:44:48.139888    5712 command_runner.go:130] ! I0318 12:18:50.085725       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 12:44:48.139888    5712 command_runner.go:130] ! I0318 12:18:50.086016       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 12:44:48.139888    5712 command_runner.go:130] ! I0318 12:18:50.086167       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 12:44:48.139947    5712 command_runner.go:130] ! I0318 12:18:50.234974       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 12:44:48.139947    5712 command_runner.go:130] ! I0318 12:18:50.242121       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 12:44:48.139947    5712 command_runner.go:130] ! I0318 12:18:50.242138       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 12:44:48.139947    5712 command_runner.go:130] ! I0318 12:18:50.384031       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 12:44:48.140012    5712 command_runner.go:130] ! I0318 12:18:50.384090       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 12:44:48.140012    5712 command_runner.go:130] ! I0318 12:18:50.384100       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 12:44:48.140012    5712 command_runner.go:130] ! I0318 12:18:50.384108       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 12:44:48.140072    5712 command_runner.go:130] ! I0318 12:18:50.530182       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 12:44:48.140072    5712 command_runner.go:130] ! I0318 12:18:50.530258       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 12:44:48.140072    5712 command_runner.go:130] ! I0318 12:18:50.530267       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 12:44:48.140072    5712 command_runner.go:130] ! I0318 12:18:50.695232       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 12:44:48.140072    5712 command_runner.go:130] ! I0318 12:18:50.695351       1 job_controller.go:226] "Starting job controller"
	I0318 12:44:48.140137    5712 command_runner.go:130] ! I0318 12:18:50.695361       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 12:44:48.140137    5712 command_runner.go:130] ! I0318 12:18:50.833418       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 12:44:48.140137    5712 command_runner.go:130] ! I0318 12:18:50.833674       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 12:44:48.140195    5712 command_runner.go:130] ! I0318 12:18:50.833686       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 12:44:48.140195    5712 command_runner.go:130] ! I0318 12:18:50.998838       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 12:44:48.140195    5712 command_runner.go:130] ! I0318 12:18:50.999193       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 12:44:48.140258    5712 command_runner.go:130] ! I0318 12:18:50.999227       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 12:44:48.140258    5712 command_runner.go:130] ! I0318 12:18:51.141445       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 12:44:48.140258    5712 command_runner.go:130] ! I0318 12:18:51.141508       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 12:44:48.140316    5712 command_runner.go:130] ! I0318 12:18:51.141518       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 12:44:48.140316    5712 command_runner.go:130] ! I0318 12:18:51.279642       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 12:44:48.140316    5712 command_runner.go:130] ! I0318 12:18:51.279728       1 gc_controller.go:101] "Starting GC controller"
	I0318 12:44:48.140667    5712 command_runner.go:130] ! I0318 12:18:51.279742       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 12:44:48.141156    5712 command_runner.go:130] ! I0318 12:18:51.429394       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 12:44:48.141229    5712 command_runner.go:130] ! I0318 12:18:51.429600       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 12:44:48.141229    5712 command_runner.go:130] ! I0318 12:18:51.429612       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 12:44:48.141285    5712 command_runner.go:130] ! I0318 12:19:01.598915       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 12:44:48.141285    5712 command_runner.go:130] ! I0318 12:19:01.598966       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 12:44:48.141321    5712 command_runner.go:130] ! I0318 12:19:01.599163       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 12:44:48.141321    5712 command_runner.go:130] ! I0318 12:19:01.599174       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 12:44:48.141321    5712 command_runner.go:130] ! I0318 12:19:01.601488       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 12:44:48.141321    5712 command_runner.go:130] ! I0318 12:19:01.601803       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 12:44:48.141394    5712 command_runner.go:130] ! I0318 12:19:01.601987       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 12:44:48.141394    5712 command_runner.go:130] ! I0318 12:19:01.602013       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 12:44:48.141394    5712 command_runner.go:130] ! I0318 12:19:01.602019       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 12:44:48.141460    5712 command_runner.go:130] ! I0318 12:19:01.623744       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 12:44:48.141460    5712 command_runner.go:130] ! I0318 12:19:01.624435       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 12:44:48.141460    5712 command_runner.go:130] ! I0318 12:19:01.624966       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 12:44:48.141530    5712 command_runner.go:130] ! I0318 12:19:01.663430       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 12:44:48.141530    5712 command_runner.go:130] ! I0318 12:19:01.663839       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 12:44:48.141530    5712 command_runner.go:130] ! I0318 12:19:01.663858       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 12:44:48.141600    5712 command_runner.go:130] ! I0318 12:19:01.710104       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 12:44:48.141600    5712 command_runner.go:130] ! I0318 12:19:01.710384       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 12:44:48.141600    5712 command_runner.go:130] ! I0318 12:19:01.710455       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 12:44:48.141600    5712 command_runner.go:130] ! I0318 12:19:01.710487       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 12:44:48.141681    5712 command_runner.go:130] ! I0318 12:19:01.710760       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 12:44:48.141681    5712 command_runner.go:130] ! I0318 12:19:01.710795       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 12:44:48.141681    5712 command_runner.go:130] ! I0318 12:19:01.710822       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 12:44:48.141739    5712 command_runner.go:130] ! I0318 12:19:01.710886       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 12:44:48.141739    5712 command_runner.go:130] ! I0318 12:19:01.710930       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 12:44:48.141780    5712 command_runner.go:130] ! I0318 12:19:01.710986       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 12:44:48.141780    5712 command_runner.go:130] ! I0318 12:19:01.711095       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 12:44:48.141780    5712 command_runner.go:130] ! I0318 12:19:01.711137       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 12:44:48.141877    5712 command_runner.go:130] ! I0318 12:19:01.711160       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 12:44:48.141877    5712 command_runner.go:130] ! I0318 12:19:01.711179       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 12:44:48.141877    5712 command_runner.go:130] ! I0318 12:19:01.711211       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 12:44:48.141877    5712 command_runner.go:130] ! I0318 12:19:01.711237       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711261       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711316       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711339       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711356       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711486       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711654       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.711784       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.715155       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.715586       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.715886       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.732340       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.732695       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.732944       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.747011       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.747361       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 12:44:48.141953    5712 command_runner.go:130] ! I0318 12:19:01.747484       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 12:44:48.141953    5712 command_runner.go:130] ! E0318 12:19:01.771424       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 12:44:48.142344    5712 command_runner.go:130] ! I0318 12:19:01.771527       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 12:44:48.142428    5712 command_runner.go:130] ! I0318 12:19:01.771544       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 12:44:48.142428    5712 command_runner.go:130] ! I0318 12:19:01.772072       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 12:44:48.142488    5712 command_runner.go:130] ! E0318 12:19:01.775461       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 12:44:48.143064    5712 command_runner.go:130] ! I0318 12:19:01.775656       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 12:44:48.143130    5712 command_runner.go:130] ! I0318 12:19:01.788795       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 12:44:48.143130    5712 command_runner.go:130] ! I0318 12:19:01.789335       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 12:44:48.143169    5712 command_runner.go:130] ! I0318 12:19:01.789368       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 12:44:48.143169    5712 command_runner.go:130] ! I0318 12:19:01.809091       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 12:44:48.143216    5712 command_runner.go:130] ! I0318 12:19:01.809368       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 12:44:48.143216    5712 command_runner.go:130] ! I0318 12:19:01.809720       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 12:44:48.143216    5712 command_runner.go:130] ! I0318 12:19:01.846190       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 12:44:48.143216    5712 command_runner.go:130] ! I0318 12:19:01.846779       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 12:44:48.143284    5712 command_runner.go:130] ! I0318 12:19:01.846879       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 12:44:48.143284    5712 command_runner.go:130] ! I0318 12:19:02.137994       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 12:44:48.144098    5712 command_runner.go:130] ! I0318 12:19:02.138059       1 horizontal.go:200] "Starting HPA controller"
	I0318 12:44:48.144168    5712 command_runner.go:130] ! I0318 12:19:02.138069       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 12:44:48.144268    5712 command_runner.go:130] ! I0318 12:19:02.189502       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 12:44:48.144268    5712 command_runner.go:130] ! I0318 12:19:02.189864       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 12:44:48.144268    5712 command_runner.go:130] ! I0318 12:19:02.190041       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:48.144268    5712 command_runner.go:130] ! I0318 12:19:02.191172       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 12:44:48.144841    5712 command_runner.go:130] ! I0318 12:19:02.191256       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 12:44:48.144928    5712 command_runner.go:130] ! I0318 12:19:02.191347       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:48.144971    5712 command_runner.go:130] ! I0318 12:19:02.193057       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 12:44:48.144971    5712 command_runner.go:130] ! I0318 12:19:02.193152       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:48.144971    5712 command_runner.go:130] ! I0318 12:19:02.193246       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:48.145107    5712 command_runner.go:130] ! I0318 12:19:02.194807       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 12:44:48.145107    5712 command_runner.go:130] ! I0318 12:19:02.194851       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 12:44:48.145107    5712 command_runner.go:130] ! I0318 12:19:02.195648       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 12:44:48.145168    5712 command_runner.go:130] ! I0318 12:19:02.194886       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 12:44:48.145168    5712 command_runner.go:130] ! I0318 12:19:02.345061       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 12:44:48.145168    5712 command_runner.go:130] ! I0318 12:19:02.347311       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 12:44:48.145168    5712 command_runner.go:130] ! I0318 12:19:02.364524       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 12:44:48.145168    5712 command_runner.go:130] ! I0318 12:19:02.380069       1 shared_informer.go:318] Caches are synced for expand
	I0318 12:44:48.145306    5712 command_runner.go:130] ! I0318 12:19:02.390503       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 12:44:48.145306    5712 command_runner.go:130] ! I0318 12:19:02.391317       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 12:44:48.145306    5712 command_runner.go:130] ! I0318 12:19:02.393201       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 12:44:48.145377    5712 command_runner.go:130] ! I0318 12:19:02.402532       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 12:44:48.145377    5712 command_runner.go:130] ! I0318 12:19:02.419971       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 12:44:48.145445    5712 command_runner.go:130] ! I0318 12:19:02.421082       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600\" does not exist"
	I0318 12:44:48.145445    5712 command_runner.go:130] ! I0318 12:19:02.427201       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 12:44:48.145445    5712 command_runner.go:130] ! I0318 12:19:02.427876       1 shared_informer.go:318] Caches are synced for service account
	I0318 12:44:48.145445    5712 command_runner.go:130] ! I0318 12:19:02.429003       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.429629       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.430311       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.432115       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.434603       1 shared_informer.go:318] Caches are synced for TTL
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.437362       1 shared_informer.go:318] Caches are synced for deployment
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.438306       1 shared_informer.go:318] Caches are synced for HPA
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.441785       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.442916       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 12:44:48.145547    5712 command_runner.go:130] ! I0318 12:19:02.444302       1 shared_informer.go:318] Caches are synced for disruption
	I0318 12:44:48.145679    5712 command_runner.go:130] ! I0318 12:19:02.447137       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 12:44:48.145679    5712 command_runner.go:130] ! I0318 12:19:02.447694       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 12:44:48.145739    5712 command_runner.go:130] ! I0318 12:19:02.452098       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 12:44:48.145739    5712 command_runner.go:130] ! I0318 12:19:02.454023       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 12:44:48.145790    5712 command_runner.go:130] ! I0318 12:19:02.461158       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 12:44:48.145855    5712 command_runner.go:130] ! I0318 12:19:02.464623       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 12:44:48.145855    5712 command_runner.go:130] ! I0318 12:19:02.480847       1 shared_informer.go:318] Caches are synced for GC
	I0318 12:44:48.145855    5712 command_runner.go:130] ! I0318 12:19:02.487772       1 shared_informer.go:318] Caches are synced for namespace
	I0318 12:44:48.145855    5712 command_runner.go:130] ! I0318 12:19:02.490082       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.494160       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.499312       1 shared_informer.go:318] Caches are synced for node
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.499587       1 range_allocator.go:174] "Sending events to api server"
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.499772       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.500365       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.500954       1 shared_informer.go:318] Caches are synced for job
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.501438       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.501724       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.503931       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.509883       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 12:44:48.145963    5712 command_runner.go:130] ! I0318 12:19:02.528934       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600" podCIDRs=["10.244.0.0/24"]
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.565942       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.603468       1 shared_informer.go:318] Caches are synced for taint
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.603627       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.603721       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600"
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.603760       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.603782       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.603821       1 taint_manager.go:210] "Sending events to api server"
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.605481       1 event.go:307] "Event occurred" object="multinode-642600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600 event: Registered Node multinode-642600 in Controller"
	I0318 12:44:48.146129    5712 command_runner.go:130] ! I0318 12:19:02.613688       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:44:48.146281    5712 command_runner.go:130] ! I0318 12:19:02.644197       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.146281    5712 command_runner.go:130] ! I0318 12:19:02.675188       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.146281    5712 command_runner.go:130] ! I0318 12:19:02.675510       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.146281    5712 command_runner.go:130] ! I0318 12:19:02.681286       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-642600" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.146281    5712 command_runner.go:130] ! I0318 12:19:03.023915       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:48.146428    5712 command_runner.go:130] ! I0318 12:19:03.023946       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:44:48.146428    5712 command_runner.go:130] ! I0318 12:19:03.029139       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:48.146428    5712 command_runner.go:130] ! I0318 12:19:03.075135       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0318 12:44:48.146428    5712 command_runner.go:130] ! I0318 12:19:03.175071       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kpt4f"
	I0318 12:44:48.146428    5712 command_runner.go:130] ! I0318 12:19:03.181384       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4dg79"
	I0318 12:44:48.146428    5712 command_runner.go:130] ! I0318 12:19:03.624405       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fgn7v"
	I0318 12:44:48.146571    5712 command_runner.go:130] ! I0318 12:19:03.691902       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xkgdt"
	I0318 12:44:48.146571    5712 command_runner.go:130] ! I0318 12:19:03.810454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="734.97569ms"
	I0318 12:44:48.146571    5712 command_runner.go:130] ! I0318 12:19:03.847906       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="37.087083ms"
	I0318 12:44:48.146571    5712 command_runner.go:130] ! I0318 12:19:03.945758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.729709ms"
	I0318 12:44:48.146571    5712 command_runner.go:130] ! I0318 12:19:03.945958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.501µs"
	I0318 12:44:48.146696    5712 command_runner.go:130] ! I0318 12:19:04.640409       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0318 12:44:48.146696    5712 command_runner.go:130] ! I0318 12:19:04.732241       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-xkgdt"
	I0318 12:44:48.146696    5712 command_runner.go:130] ! I0318 12:19:04.763359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.567183ms"
	I0318 12:44:48.146767    5712 command_runner.go:130] ! I0318 12:19:04.828298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.870031ms"
	I0318 12:44:48.146767    5712 command_runner.go:130] ! I0318 12:19:04.890459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.083804ms"
	I0318 12:44:48.146767    5712 command_runner.go:130] ! I0318 12:19:04.890764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.4µs"
	I0318 12:44:48.146822    5712 command_runner.go:130] ! I0318 12:19:15.938090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="157.9µs"
	I0318 12:44:48.146822    5712 command_runner.go:130] ! I0318 12:19:15.982953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.301µs"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:19:17.607464       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:19:19.208242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.7µs"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:19:19.274086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.124146ms"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:19:19.275145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="211.9µs"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:12.652722       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m02\" does not exist"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:12.679760       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m02" podCIDRs=["10.244.1.0/24"]
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:12.706735       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d5llj"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:12.706774       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vts9f"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:17.642129       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m02"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:17.642212       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:22:34.263318       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:01.851486       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:01.881281       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-hmhdf"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:01.924301       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-48qkw"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:01.946058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.676064ms"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:02.049702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="103.251772ms"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:02.049789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="35.4µs"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:04.783277       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.030749ms"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:04.783520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.9µs"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:05.441638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.350047ms"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:23:05.441876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="105µs"
	I0318 12:44:48.146901    5712 command_runner.go:130] ! I0318 12:27:09.073772       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:09.075345       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:09.095707       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.2.0/24"]
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:09.110695       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-khbjt"
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:09.110730       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-thkjp"
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:12.715112       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m03"
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:12.715611       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:48.147469    5712 command_runner.go:130] ! I0318 12:27:30.856729       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:35:52.853028       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:35:52.854041       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:35:52.871920       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:35:52.891158       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:40.101072       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:42.930337       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-642600-m03 event: Removing Node multinode-642600-m03 from Controller"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:46.825246       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:46.827225       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:46.865011       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.3.0/24"]
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:47.931681       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:38:52.975724       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:40:33.280094       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:40:33.281180       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:40:33.601041       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:48.147582    5712 command_runner.go:130] ! I0318 12:40:33.698293       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
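
The controller-manager output above ends in a stream of "Event occurred" records (RegisteredNode, NodeNotReady, SuccessfulCreate, RemovingNode). These events are ordinary API objects, so they can be re-read after the fact when triaging a failure like this one; a minimal client-go sketch, again assuming a placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Reads back the same events the controller-manager published above.
	evs, err := cs.CoreV1().Events("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range evs.Items {
		fmt.Printf("%s %s/%s: %s\n", e.Type, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
	}
}
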
	I0318 12:44:50.685337    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:50.685337    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:50.685337    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.685337    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:50.699158    5712 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0318 12:44:50.699253    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:50.699253    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:50 GMT
	I0318 12:44:50.699253    5712 round_trippers.go:580]     Audit-Id: 8452c9cf-f9ad-4c44-a283-8c14bef22ec7
	I0318 12:44:50.699253    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:50.699253    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:50.699253    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:50.699253    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:50.701336    5712 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2068"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"2054","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83063 chars]
	I0318 12:44:50.705245    5712 system_pods.go:59] 12 kube-system pods found
	I0318 12:44:50.705245    5712 system_pods.go:61] "coredns-5dd5756b68-fgn7v" [7bc52797-b4bd-4046-b3d5-fae9c8ccd13b] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "etcd-multinode-642600" [6f0ca14e-af4b-4442-8a48-28b69c699976] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kindnet-d5llj" [caa4170d-6120-414a-950c-92a0380a70b8] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kindnet-kpt4f" [acd9d7a0-0e27-4bbb-8562-6fbf374742ca] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kindnet-thkjp" [a7e20c36-c1d1-4146-a66c-40448e1ae0e5] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kube-apiserver-multinode-642600" [ab8e6b8b-cbac-4c90-8f57-9af2760ced9c] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kube-controller-manager-multinode-642600" [1dd2a576-c5a0-44e5-b194-545e8b18962c] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kube-proxy-4dg79" [449242c2-ad12-4da5-b339-3be7ab8a9b16] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kube-proxy-khbjt" [594efa46-7e30-40e6-92dd-9c9c80bc787a] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kube-proxy-vts9f" [9545be8f-07fd-49dd-99bd-e9e976e65e7b] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "kube-scheduler-multinode-642600" [52e29d3b-d6e9-4109-916d-74123a2ab190] Running
	I0318 12:44:50.705245    5712 system_pods.go:61] "storage-provisioner" [d2718b8a-26a9-4c86-bf9a-221d1ee23ceb] Running
	I0318 12:44:50.705245    5712 system_pods.go:74] duration metric: took 4.0124068s to wait for pod list to return data ...
	I0318 12:44:50.705245    5712 default_sa.go:34] waiting for default service account to be created ...
	I0318 12:44:50.706017    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:44:50.706081    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:50.706081    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.706081    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:50.710852    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:50.710852    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:50.711687    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:50.711687    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:50.711687    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:50.711729    5712 round_trippers.go:580]     Content-Length: 262
	I0318 12:44:50.711729    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:50 GMT
	I0318 12:44:50.711729    5712 round_trippers.go:580]     Audit-Id: 8dae92a1-618c-4624-b23a-459299dcdc55
	I0318 12:44:50.711729    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:50.711729    5712 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"2068"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"cb0307d5-001e-4a17-89ea-7a5b4f2963cc","resourceVersion":"344","creationTimestamp":"2024-03-18T12:19:02Z"}}]}
	I0318 12:44:50.712102    5712 default_sa.go:45] found service account: "default"
	I0318 12:44:50.712134    5712 default_sa.go:55] duration metric: took 6.3425ms for default service account to be created ...
	I0318 12:44:50.712134    5712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 12:44:50.712209    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:50.712270    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:50.712270    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.712270    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:50.721267    5712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:44:50.721267    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:50.721267    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:50.721267    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:50.721267    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:50 GMT
	I0318 12:44:50.721267    5712 round_trippers.go:580]     Audit-Id: 772c77a3-4597-4f1c-8e4b-46abbe90e2a2
	I0318 12:44:50.721267    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:50.721267    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:50.722618    5712 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2068"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fgn7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7bc52797-b4bd-4046-b3d5-fae9c8ccd13b","resourceVersion":"2054","creationTimestamp":"2024-03-18T12:19:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e3721eec-54e8-492c-8372-43f3415021f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:19:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3721eec-54e8-492c-8372-43f3415021f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83063 chars]
	I0318 12:44:50.726985    5712 system_pods.go:86] 12 kube-system pods found
	I0318 12:44:50.726985    5712 system_pods.go:89] "coredns-5dd5756b68-fgn7v" [7bc52797-b4bd-4046-b3d5-fae9c8ccd13b] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "etcd-multinode-642600" [6f0ca14e-af4b-4442-8a48-28b69c699976] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kindnet-d5llj" [caa4170d-6120-414a-950c-92a0380a70b8] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kindnet-kpt4f" [acd9d7a0-0e27-4bbb-8562-6fbf374742ca] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kindnet-thkjp" [a7e20c36-c1d1-4146-a66c-40448e1ae0e5] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kube-apiserver-multinode-642600" [ab8e6b8b-cbac-4c90-8f57-9af2760ced9c] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kube-controller-manager-multinode-642600" [1dd2a576-c5a0-44e5-b194-545e8b18962c] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kube-proxy-4dg79" [449242c2-ad12-4da5-b339-3be7ab8a9b16] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kube-proxy-khbjt" [594efa46-7e30-40e6-92dd-9c9c80bc787a] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kube-proxy-vts9f" [9545be8f-07fd-49dd-99bd-e9e976e65e7b] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "kube-scheduler-multinode-642600" [52e29d3b-d6e9-4109-916d-74123a2ab190] Running
	I0318 12:44:50.726985    5712 system_pods.go:89] "storage-provisioner" [d2718b8a-26a9-4c86-bf9a-221d1ee23ceb] Running
	I0318 12:44:50.726985    5712 system_pods.go:126] duration metric: took 14.8503ms to wait for k8s-apps to be running ...
	I0318 12:44:50.726985    5712 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 12:44:50.739844    5712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:44:50.769823    5712 system_svc.go:56] duration metric: took 42.8377ms WaitForService to wait for kubelet
	I0318 12:44:50.770286    5712 kubeadm.go:576] duration metric: took 1m14.5353016s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:44:50.770355    5712 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:44:50.770491    5712 round_trippers.go:463] GET https://172.25.148.129:8443/api/v1/nodes
	I0318 12:44:50.770491    5712 round_trippers.go:469] Request Headers:
	I0318 12:44:50.770576    5712 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.770576    5712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:44:50.774846    5712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:50.774846    5712 round_trippers.go:577] Response Headers:
	I0318 12:44:50.774846    5712 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:44:50.774846    5712 round_trippers.go:580]     Content-Type: application/json
	I0318 12:44:50.774846    5712 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c0aa306-964c-4772-9d80-f0b0e07bc16c
	I0318 12:44:50.774846    5712 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ed36492-6c15-4295-8a95-9853ba511359
	I0318 12:44:50.774846    5712 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:44:50 GMT
	I0318 12:44:50.774846    5712 round_trippers.go:580]     Audit-Id: 051a936f-32d5-465f-8a31-b7014848ef69
	I0318 12:44:50.774846    5712 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2068"},"items":[{"metadata":{"name":"multinode-642600","uid":"a400b04c-d28e-4532-a099-0ceec2b54e04","resourceVersion":"2015","creationTimestamp":"2024-03-18T12:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-642600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1eefae2ca5be099618e61623fc53c8bbe0e383fd","minikube.k8s.io/name":"multinode-642600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_18_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16259 chars]
	I0318 12:44:50.776623    5712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:44:50.776685    5712 node_conditions.go:123] node cpu capacity is 2
	I0318 12:44:50.776718    5712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:44:50.776718    5712 node_conditions.go:123] node cpu capacity is 2
	I0318 12:44:50.776718    5712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:44:50.776718    5712 node_conditions.go:123] node cpu capacity is 2
	I0318 12:44:50.776718    5712 node_conditions.go:105] duration metric: took 6.3638ms to run NodePressure ...
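(The NodePressure check that just completed is a plain API read: minikube GETs /api/v1/nodes — the round_trippers lines above — and inspects each node's reported capacity. A minimal client-go sketch of the same read, assuming a default kubeconfig path; this is illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client from the default kubeconfig (assumed location).
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same request as the GET /api/v1/nodes seen in the log.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is what node_conditions.go reports above.
		cpu := n.Status.Capacity[v1.ResourceCPU]
		storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}

Against this cluster it would print cpu=2 and ephemeral-storage=17734596Ki for each of the three nodes, matching the three node_conditions pairs above.)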
	I0318 12:44:50.776778    5712 start.go:240] waiting for startup goroutines ...
	I0318 12:44:50.776778    5712 start.go:245] waiting for cluster config update ...
	I0318 12:44:50.776778    5712 start.go:254] writing updated cluster config ...
	I0318 12:44:50.780972    5712 out.go:177] 
	I0318 12:44:50.783783    5712 config.go:182] Loaded profile config "ha-606900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:44:50.796808    5712 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:44:50.796808    5712 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:44:50.804019    5712 out.go:177] * Starting "multinode-642600-m02" worker node in "multinode-642600" cluster
	I0318 12:44:50.806600    5712 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 12:44:50.806600    5712 cache.go:56] Caching tarball of preloaded images
	I0318 12:44:50.806600    5712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 12:44:50.806600    5712 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 12:44:50.806600    5712 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-642600\config.json ...
	I0318 12:44:50.809297    5712 start.go:360] acquireMachinesLock for multinode-642600-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:44:50.810322    5712 start.go:364] duration metric: took 1.0247ms to acquireMachinesLock for "multinode-642600-m02"
	I0318 12:44:50.810622    5712 start.go:96] Skipping create...Using existing machine configuration
	I0318 12:44:50.810622    5712 fix.go:54] fixHost starting: m02
	I0318 12:44:50.810897    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:44:53.078575    5712 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 12:44:53.078575    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:53.078706    5712 fix.go:112] recreateIfNeeded on multinode-642600-m02: state=Stopped err=<nil>
	W0318 12:44:53.078706    5712 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 12:44:53.082461    5712 out.go:177] * Restarting existing hyperv VM for "multinode-642600-m02" ...
	I0318 12:44:53.086092    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-642600-m02
	I0318 12:44:56.372131    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:44:56.372131    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:56.372131    5712 main.go:141] libmachine: Waiting for host to start...
	I0318 12:44:56.372131    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:44:58.726364    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:44:58.726766    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:58.726859    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:01.322171    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:01.323088    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:02.332172    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:45:04.660156    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:04.660156    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:04.660156    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:07.359664    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:07.359664    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:08.370665    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:45:10.686002    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:10.686135    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:10.686238    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:13.334579    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:13.335669    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:14.342955    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:45:16.674409    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:16.674409    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:16.674558    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:19.368002    5712 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:19.368002    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:20.382337    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:45:22.700522    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:22.701157    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:22.701157    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:25.463389    5712 main.go:141] libmachine: [stdout =====>] : 172.25.144.186
	
	I0318 12:45:25.463389    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:25.467285    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:45:27.715593    5712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:27.715593    5712 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:27.716525    5712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
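(The repeated [executing ==>] / [stdout =====>] pairs above are libmachine's host-start wait loop: it polls Hyper-V through PowerShell until the VM reports Running and its first network adapter returns an IPv4 address — here 172.25.144.186, roughly 30 seconds after Start-VM. A rough Go sketch of that polling pattern, reusing the PowerShell commands the log shows; illustrative only, as the real loop lives in minikube's hyperv driver and does fuller error and timeout handling:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// run invokes PowerShell the same way the log lines show.
func run(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "multinode-642600-m02"
	deadline := time.Now().Add(3 * time.Minute) // the timeout value is an assumption
	for time.Now().Before(deadline) {
		state, _ := run(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		ip, _ := run(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
		if state == "Running" && ip != "" {
			fmt.Println("host up at", ip)
			return
		}
		// The log shows roughly one-second pauses between polls.
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for host")
})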
	
	
	==> Docker <==
	Mar 18 12:44:40 multinode-642600 dockerd[1042]: 2024/03/18 12:44:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:43 multinode-642600 dockerd[1042]: 2024/03/18 12:44:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:44 multinode-642600 dockerd[1042]: 2024/03/18 12:44:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:44 multinode-642600 dockerd[1042]: 2024/03/18 12:44:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:44 multinode-642600 dockerd[1042]: 2024/03/18 12:44:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:44 multinode-642600 dockerd[1042]: 2024/03/18 12:44:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:47 multinode-642600 dockerd[1042]: 2024/03/18 12:44:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:47 multinode-642600 dockerd[1042]: 2024/03/18 12:44:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:47 multinode-642600 dockerd[1042]: 2024/03/18 12:44:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:47 multinode-642600 dockerd[1042]: 2024/03/18 12:44:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:47 multinode-642600 dockerd[1042]: 2024/03/18 12:44:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:47 multinode-642600 dockerd[1042]: 2024/03/18 12:44:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:47 multinode-642600 dockerd[1042]: 2024/03/18 12:44:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:47 multinode-642600 dockerd[1042]: 2024/03/18 12:44:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:47 multinode-642600 dockerd[1042]: 2024/03/18 12:44:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:48 multinode-642600 dockerd[1042]: 2024/03/18 12:44:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:48 multinode-642600 dockerd[1042]: 2024/03/18 12:44:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 12:44:48 multinode-642600 dockerd[1042]: 2024/03/18 12:44:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	566e40ce923f7       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   e1b2432b0ed66       busybox-5b5d89c9d6-48qkw
	fcf17db92b351       ead0a4a53df89                                                                                         About a minute ago   Running             coredns                   1                   1090dd5740980       coredns-5dd5756b68-fgn7v
	4652c26c0904e       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   889c16eb0ab73       storage-provisioner
	9fec05a61d2a9       4950bb10b3f87                                                                                         2 minutes ago        Running             kindnet-cni               1                   5ecbdcbdad3fa       kindnet-kpt4f
	787ade2ea2cd0       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   889c16eb0ab73       storage-provisioner
	575b41a3a85a4       83f6cc407eed8                                                                                         2 minutes ago        Running             kube-proxy                1                   7a2f0ccaf5c4c       kube-proxy-4dg79
	a48a6d310b868       7fe0e6f37db33                                                                                         2 minutes ago        Running             kube-apiserver            0                   a7281d6e698ea       kube-apiserver-multinode-642600
	14ae9398d33b1       d058aa5ab969c                                                                                         2 minutes ago        Running             kube-controller-manager   1                   eca6768355c74       kube-controller-manager-multinode-642600
	bd1e4f4d262e3       e3db313c6dbc0                                                                                         2 minutes ago        Running             kube-scheduler            1                   f62197122538f       kube-scheduler-multinode-642600
	8e7911b58c587       73deb9a3f7025                                                                                         2 minutes ago        Running             etcd                      0                   67004ee038ee4       etcd-multinode-642600
	a8dd2eacb7251       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago       Exited              busybox                   0                   29bb4d534c2e2       busybox-5b5d89c9d6-48qkw
	e81f1d2fdb360       ead0a4a53df89                                                                                         26 minutes ago       Exited              coredns                   0                   ed38da653fbef       coredns-5dd5756b68-fgn7v
	5cf42651cb21d       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago       Exited              kindnet-cni               0                   fef37141be6db       kindnet-kpt4f
	4bbad08fe59ac       83f6cc407eed8                                                                                         26 minutes ago       Exited              kube-proxy                0                   2f4709a3a45a4       kube-proxy-4dg79
	a54be44369019       d058aa5ab969c                                                                                         27 minutes ago       Exited              kube-controller-manager   0                   d766c4514f0bf       kube-controller-manager-multinode-642600
	47777d4c0b90d       e3db313c6dbc0                                                                                         27 minutes ago       Exited              kube-scheduler            0                   3500a9f1ca84e       kube-scheduler-multinode-642600
	
	
	==> coredns [e81f1d2fdb36] <==
	[INFO] 10.244.1.2:43523 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001489s
	[INFO] 10.244.1.2:47882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001346s
	[INFO] 10.244.1.2:38222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057401s
	[INFO] 10.244.1.2:49068 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001253s
	[INFO] 10.244.1.2:35375 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000582s
	[INFO] 10.244.1.2:40933 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179201s
	[INFO] 10.244.1.2:36014 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002051s
	[INFO] 10.244.0.3:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265401s
	[INFO] 10.244.0.3:52912 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148001s
	[INFO] 10.244.0.3:33147 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143701s
	[INFO] 10.244.0.3:49893 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000536s
	[INFO] 10.244.1.2:42681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001221s
	[INFO] 10.244.1.2:41416 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143s
	[INFO] 10.244.1.2:58254 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000241501s
	[INFO] 10.244.1.2:35844 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197201s
	[INFO] 10.244.0.3:33559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102201s
	[INFO] 10.244.0.3:53963 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158701s
	[INFO] 10.244.0.3:41406 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001297s
	[INFO] 10.244.0.3:34685 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000264001s
	[INFO] 10.244.1.2:43312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001178s
	[INFO] 10.244.1.2:55281 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235501s
	[INFO] 10.244.1.2:34710 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000874s
	[INFO] 10.244.1.2:57686 - 5 "PTR IN 1.144.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000557s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fcf17db92b35] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 07d6393480c36cc6b464d3853a5e32028517fcba50e93adef34ce624ca099b3a1e269a86e99bf5086a15610de9e11b2980c233f8d3dcbff38f702488f0fd5328
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53681 - 55845 "HINFO IN 162544917519141994.8165783507281513505. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.028223444s
	
	
	==> describe nodes <==
	Name:               multinode-642600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-642600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=multinode-642600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T12_18_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:18:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-642600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:46:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:44:11 +0000   Mon, 18 Mar 2024 12:44:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.148.129
	  Hostname:    multinode-642600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 021cb44913fc4689ab25739f723ae3da
	  System UUID:                8a1bcbab-f132-7f42-b33a-a7db97e0afe6
	  Boot ID:                    f11360a5-920e-4374-9d22-d06f111079d8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-48qkw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-5dd5756b68-fgn7v                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-642600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m32s
	  kube-system                 kindnet-kpt4f                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-642600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-multinode-642600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-4dg79                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-642600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  Starting                 2m30s                  kube-proxy       
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-642600 status is now: NodeReady
	  Normal  Starting                 2m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node multinode-642600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m39s (x7 over 2m39s)  kubelet          Node multinode-642600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m20s                  node-controller  Node multinode-642600 event: Registered Node multinode-642600 in Controller
	
	
	Name:               multinode-642600-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-642600-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=multinode-642600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_22_13_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:22:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-642600-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:40:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 12:38:34 +0000   Mon, 18 Mar 2024 12:44:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.25.159.102
	  Hostname:    multinode-642600-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 3840c114554e41ff9ded1410244d8aba
	  System UUID:                23dbf5b1-f940-4749-8caf-1ae12d869a30
	  Boot ID:                    9a3fcab5-beb6-4505-b112-82809850bba3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-hmhdf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-d5llj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-vts9f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x5 over 23m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x5 over 23m)  kubelet          Node multinode-642600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x5 over 23m)  kubelet          Node multinode-642600-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23m                node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-642600-m02 status is now: NodeReady
	  Normal  RegisteredNode           2m20s              node-controller  Node multinode-642600-m02 event: Registered Node multinode-642600-m02 in Controller
	  Normal  NodeNotReady             100s               node-controller  Node multinode-642600-m02 status is now: NodeNotReady
	
	
	Name:               multinode-642600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-642600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=multinode-642600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_38_47_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:38:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-642600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:39:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 12:38:52 +0000   Mon, 18 Mar 2024 12:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.25.157.200
	  Hostname:    multinode-642600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 b858c7f1c1bc42a69e1927ccc26ea5ce
	  System UUID:                8c4fd36f-ab8b-5447-9df2-542afafc5ab4
	  Boot ID:                    cea0ecfe-24ab-4614-a808-1e2a7a960f26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-thkjp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-proxy-khbjt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  Starting                 7m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x5 over 18m)      kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x5 over 18m)      kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x5 over 18m)      kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                    kubelet          Node multinode-642600-m03 status is now: NodeReady
	  Normal  Starting                 7m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m17s (x2 over 7m17s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m17s (x2 over 7m17s)  kubelet          Node multinode-642600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m17s (x2 over 7m17s)  kubelet          Node multinode-642600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m16s                  node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
	  Normal  NodeReady                7m11s                  kubelet          Node multinode-642600-m03 status is now: NodeReady
	  Normal  NodeNotReady             5m30s                  node-controller  Node multinode-642600-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           2m20s                  node-controller  Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +5.633479] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.746575] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.948336] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +7.356358] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar18 12:42] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.196447] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[Mar18 12:43] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[  +0.116812] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.565179] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[  +0.224131] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	[  +0.243543] systemd-fstab-generator[1034]: Ignoring "noauto" option for root device
	[  +2.986318] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.197212] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.228503] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	[  +0.297734] systemd-fstab-generator[1258]: Ignoring "noauto" option for root device
	[  +0.969011] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +0.114690] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.575437] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	[  +1.537938] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.654182] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.384606] systemd-fstab-generator[2563]: Ignoring "noauto" option for root device
	[  +7.200668] kauditd_printk_skb: 70 callbacks suppressed
	
	
	==> etcd [8e7911b58c58] <==
	{"level":"info","ts":"2024-03-18T12:43:26.375647Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T12:43:26.375735Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T12:43:26.377469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 switched to configuration voters=(8680198388102902480)"}
	{"level":"info","ts":"2024-03-18T12:43:26.377568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","added-peer-id":"78764271becab2d0","added-peer-peer-urls":["https://172.25.151.112:2380"]}
	{"level":"info","ts":"2024-03-18T12:43:26.378749Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31713adf8492fbc4","local-member-id":"78764271becab2d0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T12:43:26.378942Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T12:43:26.380244Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T12:43:26.380886Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"78764271becab2d0","initial-advertise-peer-urls":["https://172.25.148.129:2380"],"listen-peer-urls":["https://172.25.148.129:2380"],"advertise-client-urls":["https://172.25.148.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.148.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T12:43:26.383141Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.25.148.129:2380"}
	{"level":"info","ts":"2024-03-18T12:43:26.383279Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.25.148.129:2380"}
	{"level":"info","ts":"2024-03-18T12:43:26.393018Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T12:43:27.621966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T12:43:27.622399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T12:43:27.622624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgPreVoteResp from 78764271becab2d0 at term 2"}
	{"level":"info","ts":"2024-03-18T12:43:27.622825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T12:43:27.624231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 received MsgVoteResp from 78764271becab2d0 at term 3"}
	{"level":"info","ts":"2024-03-18T12:43:27.624426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78764271becab2d0 became leader at term 3"}
	{"level":"info","ts":"2024-03-18T12:43:27.624696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 78764271becab2d0 elected leader 78764271becab2d0 at term 3"}
	{"level":"info","ts":"2024-03-18T12:43:27.641347Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"78764271becab2d0","local-member-attributes":"{Name:multinode-642600 ClientURLs:[https://172.25.148.129:2379]}","request-path":"/0/members/78764271becab2d0/attributes","cluster-id":"31713adf8492fbc4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T12:43:27.641882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T12:43:27.64409Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T12:43:27.644373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T12:43:27.641995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T12:43:27.650212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.148.129:2379"}
	{"level":"info","ts":"2024-03-18T12:43:27.651053Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:46:04 up 4 min,  0 users,  load average: 0.77, 0.40, 0.15
	Linux multinode-642600 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5cf42651cb21] <==
	I0318 12:40:04.067846       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:40:14.082426       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:40:14.082921       1 main.go:227] handling current node
	I0318 12:40:14.082946       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:40:14.082956       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:40:14.083174       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:40:14.083247       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:40:24.098060       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:40:24.098161       1 main.go:227] handling current node
	I0318 12:40:24.098178       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:40:24.098187       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:40:24.098316       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:40:24.098324       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:40:34.335103       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:40:34.335169       1 main.go:227] handling current node
	I0318 12:40:34.335185       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:40:34.335192       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:40:34.335470       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:40:34.335488       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:40:44.342962       1 main.go:223] Handling node with IPs: map[172.25.151.112:{}]
	I0318 12:40:44.343122       1 main.go:227] handling current node
	I0318 12:40:44.343139       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:40:44.343148       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:40:44.343738       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:40:44.343780       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [9fec05a61d2a] <==
	I0318 12:45:14.005875       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:45:24.103450       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:45:24.103507       1 main.go:227] handling current node
	I0318 12:45:24.103524       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:45:24.103532       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:45:24.104126       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:45:24.104194       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:45:34.119215       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:45:34.119840       1 main.go:227] handling current node
	I0318 12:45:34.120044       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:45:34.120252       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:45:34.203580       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:45:34.203775       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:45:44.218808       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:45:44.218944       1 main.go:227] handling current node
	I0318 12:45:44.218961       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:45:44.218969       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:45:44.219497       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:45:44.219686       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	I0318 12:45:54.228679       1 main.go:223] Handling node with IPs: map[172.25.148.129:{}]
	I0318 12:45:54.228785       1 main.go:227] handling current node
	I0318 12:45:54.228802       1 main.go:223] Handling node with IPs: map[172.25.159.102:{}]
	I0318 12:45:54.228811       1 main.go:250] Node multinode-642600-m02 has CIDR [10.244.1.0/24] 
	I0318 12:45:54.228995       1 main.go:223] Handling node with IPs: map[172.25.157.200:{}]
	I0318 12:45:54.229028       1 main.go:250] Node multinode-642600-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a48a6d310b86] <==
	I0318 12:43:30.386163       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 12:43:30.386327       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 12:43:30.474963       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 12:43:30.476622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 12:43:30.496736       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 12:43:30.497067       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 12:43:30.497511       1 aggregator.go:166] initial CRD sync complete...
	I0318 12:43:30.498503       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 12:43:30.498662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 12:43:30.498825       1 cache.go:39] Caches are synced for autoregister controller
	I0318 12:43:30.570075       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 12:43:30.585880       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 12:43:30.624565       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 12:43:30.681515       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 12:43:30.681604       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 12:43:31.410513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0318 12:43:31.917736       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129 172.25.151.112]
	I0318 12:43:31.919293       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 12:43:31.929122       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 12:43:34.160688       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 12:43:34.367742       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 12:43:34.406080       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 12:43:34.542647       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 12:43:34.562855       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0318 12:43:51.920595       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.25.148.129]
	
	
	==> kube-controller-manager [14ae9398d33b] <==
	I0318 12:43:43.585098       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 12:43:43.586663       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 12:43:43.590461       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:43:43.597830       1 shared_informer.go:318] Caches are synced for job
	I0318 12:43:43.635734       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 12:43:43.658493       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:43:43.686534       1 shared_informer.go:318] Caches are synced for disruption
	I0318 12:43:44.024395       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:43:44.024760       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:43:44.048280       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:44:11.303411       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:44:13.533509       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-48qkw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-48qkw"
	I0318 12:44:13.534203       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-fgn7v" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-fgn7v"
	I0318 12:44:13.534478       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0318 12:44:23.562573       1 event.go:307] "Event occurred" object="multinode-642600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m02 status is now: NodeNotReady"
	I0318 12:44:23.591486       1 event.go:307] "Event occurred" object="kube-system/kindnet-d5llj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:23.614671       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vts9f" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:23.639496       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-hmhdf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:44:23.661949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.740356ms"
	I0318 12:44:23.663289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="50.499µs"
	I0318 12:44:37.149797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.1µs"
	I0318 12:44:37.209300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="28.125704ms"
	I0318 12:44:37.209415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.4µs"
	I0318 12:44:37.245284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.227968ms"
	I0318 12:44:37.254358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="3.872028ms"
	
	
	==> kube-controller-manager [a54be4436901] <==
	I0318 12:23:05.441638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.350047ms"
	I0318 12:23:05.441876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="105µs"
	I0318 12:27:09.073772       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:27:09.075345       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:27:09.095707       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.2.0/24"]
	I0318 12:27:09.110695       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-khbjt"
	I0318 12:27:09.110730       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-thkjp"
	I0318 12:27:12.715112       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-642600-m03"
	I0318 12:27:12.715611       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:27:30.856729       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:35:52.853028       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:35:52.854041       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:35:52.871920       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:35:52.891158       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:38:40.101072       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:38:42.930337       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-642600-m03 event: Removing Node multinode-642600-m03 from Controller"
	I0318 12:38:46.825246       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:38:46.827225       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-642600-m03\" does not exist"
	I0318 12:38:46.865011       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-642600-m03" podCIDRs=["10.244.3.0/24"]
	I0318 12:38:47.931681       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-642600-m03 event: Registered Node multinode-642600-m03 in Controller"
	I0318 12:38:52.975724       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:40:33.280094       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-642600-m02"
	I0318 12:40:33.281180       1 event.go:307] "Event occurred" object="multinode-642600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-642600-m03 status is now: NodeNotReady"
	I0318 12:40:33.601041       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-khbjt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:40:33.698293       1 event.go:307] "Event occurred" object="kube-system/kindnet-thkjp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [4bbad08fe59a] <==
	I0318 12:19:04.970720       1 server_others.go:69] "Using iptables proxy"
	I0318 12:19:04.997380       1 node.go:141] Successfully retrieved node IP: 172.25.151.112
	I0318 12:19:05.099028       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:19:05.099065       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:19:05.102885       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:19:05.103013       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:19:05.103652       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:19:05.103704       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:19:05.105505       1 config.go:188] "Starting service config controller"
	I0318 12:19:05.106093       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:19:05.106131       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:19:05.106138       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:19:05.107424       1 config.go:315] "Starting node config controller"
	I0318 12:19:05.107456       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:19:05.206699       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:19:05.206811       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:19:05.207857       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [575b41a3a85a] <==
	I0318 12:43:33.336778       1 server_others.go:69] "Using iptables proxy"
	I0318 12:43:33.550433       1 node.go:141] Successfully retrieved node IP: 172.25.148.129
	I0318 12:43:33.793084       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:43:33.793109       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:43:33.796954       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:43:33.798936       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:43:33.800347       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:43:33.800569       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:43:33.803648       1 config.go:188] "Starting service config controller"
	I0318 12:43:33.805156       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:43:33.805421       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:43:33.805584       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:43:33.808628       1 config.go:315] "Starting node config controller"
	I0318 12:43:33.808736       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:43:33.905580       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:43:33.907041       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:43:33.909416       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [47777d4c0b90] <==
	E0318 12:18:47.563806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 12:18:47.597770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 12:18:47.597873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 12:18:47.684794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 12:18:47.685008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 12:18:47.685352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 12:18:47.685509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 12:18:47.840132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 12:18:47.840303       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 12:18:47.879838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 12:18:47.880363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 12:18:47.906171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 12:18:47.906493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 12:18:48.059997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 12:18:48.060049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 12:18:48.096160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 12:18:48.096304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 12:18:48.096504       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 12:18:48.096662       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 12:18:48.133175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 12:18:48.133469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 12:18:48.135066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 12:18:48.135196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 12:18:50.022459       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0318 12:40:51.995231       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bd1e4f4d262e] <==
	I0318 12:43:27.649061       1 serving.go:348] Generated self-signed cert in-memory
	W0318 12:43:30.548831       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 12:43:30.549092       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 12:43:30.549282       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 12:43:30.549461       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 12:43:30.613305       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 12:43:30.613417       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:43:30.618512       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 12:43:30.619171       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 12:43:30.619276       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 12:43:30.620071       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 12:43:30.720411       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032626    1523 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.032722    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume podName:7bc52797-b4bd-4046-b3d5-fae9c8ccd13b nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.0327033 +0000 UTC m=+71.043942936 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7bc52797-b4bd-4046-b3d5-fae9c8ccd13b-config-volume") pod "coredns-5dd5756b68-fgn7v" (UID: "7bc52797-b4bd-4046-b3d5-fae9c8ccd13b") : object "kube-system"/"coredns" not registered
	Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134727    1523 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.134857    1523 projected.go:198] Error preparing data for projected volume kube-api-access-9g8n5 for pod default/busybox-5b5d89c9d6-48qkw: object "default"/"kube-root-ca.crt" not registered
	Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.135073    1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5 podName:45969c0e-ac43-459e-95c0-86f7b76947db nodeName:}" failed. No retries permitted until 2024-03-18 12:44:35.13505028 +0000 UTC m=+71.146289916 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-9g8n5" (UniqueName: "kubernetes.io/projected/45969c0e-ac43-459e-95c0-86f7b76947db-kube-api-access-9g8n5") pod "busybox-5b5d89c9d6-48qkw" (UID: "45969c0e-ac43-459e-95c0-86f7b76947db") : object "default"/"kube-root-ca.crt" not registered
	Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360260    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-48qkw" podUID="45969c0e-ac43-459e-95c0-86f7b76947db"
	Mar 18 12:44:03 multinode-642600 kubelet[1523]: E0318 12:44:03.360354    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-fgn7v" podUID="7bc52797-b4bd-4046-b3d5-fae9c8ccd13b"
	Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.124509    1523 scope.go:117] "RemoveContainer" containerID="996fb0f2ade69129acd747fc5146ef4295cc7ebd79cae8e8f881a21393ddb74a"
	Mar 18 12:44:04 multinode-642600 kubelet[1523]: I0318 12:44:04.125880    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	Mar 18 12:44:04 multinode-642600 kubelet[1523]: E0318 12:44:04.127355    1523 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d2718b8a-26a9-4c86-bf9a-221d1ee23ceb)\"" pod="kube-system/storage-provisioner" podUID="d2718b8a-26a9-4c86-bf9a-221d1ee23ceb"
	Mar 18 12:44:17 multinode-642600 kubelet[1523]: I0318 12:44:17.359956    1523 scope.go:117] "RemoveContainer" containerID="787ade2ea2cd019bbcde5523b750659f59b0187d9de4b757fba8a14f9a126460"
	Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.325657    1523 scope.go:117] "RemoveContainer" containerID="301c80f8b38cb79f051755af6af0fb604c0eee0689fd1f2d22a66e0969a9583f"
	Mar 18 12:44:24 multinode-642600 kubelet[1523]: I0318 12:44:24.374630    1523 scope.go:117] "RemoveContainer" containerID="4b94d396876e5c7e3b8c69b01560d10ad95ff183ab3cc78a194276537cfd6cf5"
	Mar 18 12:44:24 multinode-642600 kubelet[1523]: E0318 12:44:24.399375    1523 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:44:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:44:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:44:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:44:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:44:35 multinode-642600 kubelet[1523]: I0318 12:44:35.962288    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1b2432b0ed66a1175586c13232eb9b9239f18a4f9a86e2a0c5f48c1407fdb14"
	Mar 18 12:44:36 multinode-642600 kubelet[1523]: I0318 12:44:36.079817    1523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1090dd57409807a15613607fd810b67863a9dd9c5a8512d7a6720906641c7f26"
	Mar 18 12:45:24 multinode-642600 kubelet[1523]: E0318 12:45:24.398083    1523 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:45:24 multinode-642600 kubelet[1523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:45:24 multinode-642600 kubelet[1523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:45:24 multinode-642600 kubelet[1523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:45:24 multinode-642600 kubelet[1523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 12:45:52.988877    7292 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-642600 -n multinode-642600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-642600 -n multinode-642600: (12.8199059s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-642600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (412.40s)
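
Note: the "Unable to resolve the current Docker CLI context" warning recurs in every minikube invocation in this run. Docker CLI context metadata lives under ~/.docker/contexts/meta/<sha256 of the context name>/meta.json, and the directory name in the warning path is simply the SHA-256 of "default". A minimal standalone sketch (not part of the suite) confirming the hash:

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		// Docker CLI stores context metadata under
		// ~/.docker/contexts/meta/<sha256(context name)>/meta.json;
		// the directory in the warning corresponds to the "default" context.
		fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
		// prints 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
	}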

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (299.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-148100 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-148100 --driver=hyperv: exit status 1 (4m59.6533525s)

                                                
                                                
-- stdout --
	* [NoKubernetes-148100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18431
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-148100" primary control-plane node in "NoKubernetes-148100" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 13:03:31.670586    6640 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-148100 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-148100 -n NoKubernetes-148100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-148100 -n NoKubernetes-148100: exit status 7 (302.4342ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 13:08:31.034688    5716 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-148100" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.96s)
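
Note: minikube "status" encodes host state in its exit code, and the harness treats the non-zero code above as informational ("may be ok") rather than a command failure. A minimal sketch (assumed binary path and profile name taken from the run above, not part of the suite) of extracting such a code in Go:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-windows-amd64.exe",
			"status", "--format={{.Host}}", "-p", "NoKubernetes-148100")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// In the run above this prints "exit status 7",
			// matching the "Nonexistent" host state.
			fmt.Println("exit status", ee.ExitCode())
		}
	}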

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (10800.607s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-320800 --alsologtostderr -v=1 --driver=hyperv
panic: test timed out after 3h0m0s
running tests:
	TestForceSystemdEnv (3m18s)
	TestNetworkPlugins (6m16s)
	TestPause (6m1s)
	TestPause/serial (6m1s)
	TestPause/serial/SecondStartNoReconfiguration (0s)
	TestStartStop (6m16s)
	TestStartStop/group/no-preload (1m30s)
	TestStartStop/group/no-preload/serial (1m30s)
	TestStartStop/group/no-preload/serial/FirstStart (1m30s)
	TestStartStop/group/old-k8s-version (2m35s)
	TestStartStop/group/old-k8s-version/serial (2m35s)
	TestStartStop/group/old-k8s-version/serial/FirstStart (2m35s)
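
The panic above is the standard behavior of go test's -timeout flag (this suite runs with -timeout 3h0m0s): when the deadline fires, the alarm armed by testing.(*M).startAlarm panics and dumps every live goroutine, which is what follows. A minimal standalone reproduction (hypothetical test file, not part of this suite):

	package demo

	import (
		"testing"
		"time"
	)

	// Run with: go test -timeout 1s
	// The test binary panics with "test timed out after 1s" and
	// prints a goroutine dump like the one below.
	func TestHang(t *testing.T) {
		time.Sleep(10 * time.Second)
	}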

                                                
                                                
goroutine 2368 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 2 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00094a9c0, 0xc0008d1bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000108918, {0x53744c0, 0x2a, 0x2a}, {0x30d612e?, 0xfe81af?, 0x5396ca0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00092d900)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00092d900)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000110e80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2222 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004ec8c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00094ab60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00094ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00094ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00094ab60, 0xc000110a00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 66 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 12
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 2268 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00231bba0, 0xc0023fc180)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2083
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 152 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x404dce0, 0xc000544a20}, 0xc000437f50, 0xc000437f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x404dce0, 0xc000544a20}, 0xf8?, 0xc000437f50, 0xc000437f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x404dce0?, 0xc000544a20?}, 0xc00094ad00?, 0x1077f40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x1078ea5?, 0xc00094ad00?, 0xc0008cc000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 200
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 153 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 152
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 151 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0006ab3d0, 0x3d)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2b95a80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00200b0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006ab400)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0023fa000, {0x402b460, 0xc000aa4030}, 0x1, 0xc000544a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0023fa000, 0x3b9aca00, 0x0, 0x1, 0xc000544a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 200
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 644 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7ffe44dd4de0?, {0xc0008b9a80?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x438, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0025e2570)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000a1e160)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000a1e160)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0000eda00, 0xc000a1e160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc0000eda00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:155 +0x3ba
testing.tRunner(0xc0000eda00, 0x3adec48)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 199 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00200b200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 187
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 200 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006ab400, 0xc000544a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 187
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2265 [chan receive, 2 minutes]:
testing.(*T).Run(0xc00231b6c0, {0x307cd73?, 0x0?}, 0xc000070480)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00231b6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00231b6c0, 0xc0006aac00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2261
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2225 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004ec8c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00094b040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00094b040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00094b040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00094b040, 0xc000a40a00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 812 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0023c4960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 808
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2239 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004ec8c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002210340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002210340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002210340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002210340, 0xc0004cc480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2262 [chan receive, 2 minutes]:
testing.(*T).Run(0xc00231b1e0, {0x307cd73?, 0x0?}, 0xc0004cc180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00231b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00231b1e0, 0xc0006aaac0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2261
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2224 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004ec8c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00094aea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00094aea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00094aea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00094aea0, 0xc000a40980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 708 [IO wait, 162 minutes]:
internal/poll.runtime_pollWait(0x21f78774430, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0000a9408?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc001fd4520, 0xc002899bb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc001fd4508, 0x3f8, {0xc00021b0e0?, 0x0?, 0x0?}, 0xc0000a9008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc001fd4508, 0xc002899d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc001fd4508)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc00060a7e0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00060a7e0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0007c20f0, {0x4041830, 0xc00060a7e0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0007c20f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00094a820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 609
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 2376 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xc002e48800?, {0xc002341b20?, 0x103b205?, 0xab?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x6d?, 0x39?, 0xb0?, 0xa?, 0xc002341c08?, 0xf328db?, 0xf28c66?, 0xfa8d65?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x550, {0xc0020aba07?, 0x5f9, 0xc0020ab800?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002400288?, {0xc0020aba07?, 0xf65210?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002400288, {0xc0020aba07, 0x5f9, 0x5f9})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00094c1a8, {0xc0020aba07?, 0xc002341d98?, 0x207?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001fae1b0, {0x402a020, 0xc002b24078})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x402a160, 0xc001fae1b0}, {0x402a020, 0xc002b24078}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x402a160, 0xc001fae1b0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x5328a20?, {0x402a160?, 0xc001fae1b0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x402a160, 0xc001fae1b0}, {0x402a0e0, 0xc00094c1a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001fb8000?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2375
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

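Note: goroutines like 2376 (and 2366, 2389, 2390 below) are the stdout/stderr copiers that os/exec.(*Cmd).Start spawns when Stdout or Stderr is not an *os.File; each blocks in a read syscall until the child process closes its end of the pipe. A minimal sketch of what produces these frames:

	// exec_sketch.go — buffer-backed capture creates one copier goroutine per stream.
	package integration_sketch

	import (
		"bytes"
		"os/exec"
	)

	func runAndCapture(name string, args ...string) (stdout, stderr string, err error) {
		var outBuf, errBuf bytes.Buffer
		cmd := exec.Command(name, args...)
		cmd.Stdout = &outBuf // pipe + copier goroutine, as in the dump
		cmd.Stderr = &errBuf // second copier goroutine
		err = cmd.Run()      // Wait joins both copiers before returning
		return outBuf.String(), errBuf.String(), err
	}
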
goroutine 813 [chan receive, 150 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001fb8480, 0xc000544a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 808
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 2290 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004ec8c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002211040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002211040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002211040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002211040, 0xc0004cc600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2374 [chan receive, 2 minutes]:
testing.(*T).Run(0xc00094b6c0, {0x308633b?, 0x60400000004?}, 0xc0004cc280)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00094b6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00094b6c0, 0xc0004cc180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2262
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2366 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xc002063b10?, {0xc002063b20?, 0xf47f45?, 0x53a3b60?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x10000c0001f9f67?, 0xc002063b80?, 0xf3fe76?, 0x5424100?, 0xc002063c08?, 0xf32a45?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x618, {0xc0021fd52b?, 0x2ad5, 0xfe42bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001f92a08?, {0xc0021fd52b?, 0xf6c25e?, 0x8000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001f92a08, {0xc0021fd52b, 0x2ad5, 0x2ad5})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a0a868, {0xc0021fd52b?, 0x4a3?, 0x3e3a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0025e4180, {0x402a020, 0xc002b24130})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x402a160, 0xc0025e4180}, {0x402a020, 0xc002b24130}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x402a160, 0xc0025e4180})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x5328a20?, {0x402a160?, 0xc0025e4180?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x402a160, 0xc0025e4180}, {0x402a0e0, 0xc000a0a868}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00267c360?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2364
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 865 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x404dce0, 0xc000544a20}, 0xc002343f50, 0xc002343f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x404dce0, 0xc000544a20}, 0xa0?, 0xc002343f50, 0xc002343f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x404dce0?, 0xc000544a20?}, 0x0?, 0x1077f40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002343fd0?, 0x10be6e4?, 0xc000a0c2d0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 813
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 864 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc001fb8450, 0x35)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2b95a80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0023c4840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001fb8480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002823590, {0x402b460, 0xc00208c510}, 0x1, 0xc000544a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002823590, 0x3b9aca00, 0x0, 0x1, 0xc000544a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 813
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 1249 [chan send, 147 minutes]:
os/exec.(*Cmd).watchCtx(0xc002752f20, 0xc002746ea0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1248
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

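Note: goroutine 1249 has sat in a channel send inside os/exec.(*Cmd).watchCtx for 147 minutes (goroutine 1079 below shows the same shape). From the stack alone, the likely cause is a command started with exec.CommandContext whose Wait was never reached, so nothing drains the watcher's result channel; this is an inference, not a confirmed root cause. The pairing that avoids it:

	// watchctx_sketch.go — always pair Start with Wait (or use Run).
	package integration_sketch

	import (
		"context"
		"os/exec"
	)

	func runWithContext(ctx context.Context, name string, args ...string) error {
		cmd := exec.CommandContext(ctx, name, args...)
		if err := cmd.Start(); err != nil {
			return err
		}
		// Wait receives from the watchCtx goroutine; skipping it can leave
		// that goroutine parked in "chan send", as seen above.
		return cmd.Wait()
	}
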
goroutine 882 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 865
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2390 [runnable, locked to thread]:
syscall.SyscallN(0xc00079dc00?, {0xc0023bbb20?, 0xf47f45?, 0x5424100?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0028feb59?, 0xc0023bbb80?, 0xf3fe76?, 0x5424100?, 0xc0023bbc08?, 0xf32a45?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6fc, {0xc0026201e5?, 0x21b, 0xfe42bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0021bcc88?, {0xc0026201e5?, 0xf6c25e?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0021bcc88, {0xc0026201e5, 0x21b, 0x21b})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002b24108, {0xc0026201e5?, 0xc0023bbd98?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021ae1b0, {0x402a020, 0xc002b24148})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x402a160, 0xc0021ae1b0}, {0x402a020, 0xc002b24148}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x402a160, 0xc0021ae1b0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x5328a20?, {0x402a160?, 0xc0021ae1b0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x402a160, 0xc0021ae1b0}, {0x402a0e0, 0xc002b24108}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002af7260?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2388
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2223 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004ec8c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00094ad00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00094ad00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00094ad00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00094ad00, 0xc000a40900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2241 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004ec8c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002210ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002210ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002210ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002210ea0, 0xc0004cc580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2264 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004ec8c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00231b520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00231b520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00231b520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00231b520, 0xc0006aabc0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2261
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2284 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xfa8caa?, {0xc002007b20?, 0xf47f45?, 0x5424100?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00078e600?, 0xc002007b80?, 0xf3fe76?, 0x5424100?, 0xc002007c08?, 0xf32a45?, 0x21f32d40598?, 0xc00251e34d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x664, {0xc00257c250?, 0x5b0, 0xfe42bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00268e788?, {0xc00257c250?, 0xc002007c50?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00268e788, {0xc00257c250, 0x5b0, 0x5b0})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a0a708, {0xc00257c250?, 0x0?, 0x211?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0025980f0, {0x402a020, 0xc00011ac50})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x402a160, 0xc0025980f0}, {0x402a020, 0xc00011ac50}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00073e930?, {0x402a160, 0xc0025980f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x5328a20?, {0x402a160?, 0xc0025980f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x402a160, 0xc0025980f0}, {0x402a0e0, 0xc000a0a708}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x3aded40?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 644
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2267 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004ec8c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00231ba00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00231ba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00231ba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00231ba00, 0xc0006ab200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2261
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1079 [chan send, 143 minutes]:
os/exec.(*Cmd).watchCtx(0xc002338000, 0xc0005441e0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 834
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2388 [syscall, locked to thread]:
syscall.SyscallN(0x7ffe44dd4de0?, {0xc00006ba10?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6b0, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0025e8600)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002752000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002752000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002211380, 0xc002752000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStartNoReconfigure({0x404db20, 0xc000408a80}, 0xc002211380, {0xc0026881b0?, 0xc02930207c?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:92 +0x245
k8s.io/minikube/test/integration.TestPause.func1.1(0xc002211380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc002211380, 0xc001fb8000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2344
	/usr/local/go/src/testing/testing.go:1742 +0x390

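Note: goroutines 2388, 2364, and 2375 are the tests themselves, blocked in WaitForSingleObject while a minikube child process runs; integration.Run (helpers_test.go:103) wraps exec.Cmd.Run. A hedged sketch of such a helper (minikube's actual logging and error handling may differ):

	// helpers_sketch_test.go — a Run-style helper in the spirit of helpers_test.go:103.
	package integration_sketch

	import (
		"os/exec"
		"testing"
		"time"
	)

	func run(t *testing.T, cmd *exec.Cmd) {
		t.Helper()
		start := time.Now()
		t.Logf("(dbg) Run:  %s", cmd) // exec.Cmd implements fmt.Stringer
		if err := cmd.Run(); err != nil {
			t.Errorf("%s failed after %s: %v", cmd, time.Since(start), err)
		}
	}
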
goroutine 2363 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0022109c0, {0x308633b?, 0x60400000004?}, 0xc000070500)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0022109c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0022109c0, 0xc000070480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2265
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2266 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004ec8c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00231b860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00231b860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00231b860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00231b860, 0xc0006aad40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2261
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2286 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a1e160, 0xc00259c120)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 644
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2367 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc002338000, 0xc001fa01e0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2364
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2364 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffe44dd4de0?, {0xc00361fae0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x680, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00201a750)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002338000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002338000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0022111e0, 0xc002338000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x404db20?, 0xc0005b4070?}, 0xc0022111e0, {0xc002082000?, 0x65f84297?}, {0xc00e22db58?, 0xc00361ff60?}, {0x1077613?, 0xfc8eaf?}, {0xc0020d0000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xd5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0022111e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0022111e0, 0xc000070500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2363
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2389 [runnable, locked to thread]:
syscall.SyscallN(0x0?, {0xc002045b20?, 0xf47f45?, 0x5424100?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xf32db9?, 0xc002045b80?, 0xf3fe76?, 0x5424100?, 0xc002045c08?, 0xf32a45?, 0x21f32d40598?, 0x35?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x724, {0xc000a2e600?, 0x200, 0xc000a2e600?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0021bc288?, {0xc000a2e600?, 0xf6c25e?, 0x200?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0021bc288, {0xc000a2e600, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002b240b0, {0xc000a2e600?, 0xc002045d98?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021ae180, {0x402a020, 0xc000a0a018})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x402a160, 0xc0021ae180}, {0x402a020, 0xc000a0a018}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x402a160, 0xc0021ae180})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x5328a20?, {0x402a160?, 0xc0021ae180?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x402a160, 0xc0021ae180}, {0x402a0e0, 0xc002b240b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00242a120?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2388
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2083 [chan receive, 7 minutes]:
testing.(*T).Run(0xc00231a000, {0x307b86f?, 0xf9f56d?}, 0xc0023fc180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00231a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00231a000, 0x3adecf0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2131 [chan receive, 7 minutes]:
testing.(*T).Run(0xc002210000, {0x307b86f?, 0x1077613?}, 0x3adef10)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc002210000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc002210000, 0x3aded38)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2377 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x1e?, {0xc000aefb20?, 0xf47f45?, 0xc000aefb38?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00209f80a?, 0x0?, 0x7f6?, 0x180a?, 0xc00209e000?, 0x1?, 0x246?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x69c, {0xc00209fce6?, 0x31a, 0xfe42bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002400788?, {0xc00209fce6?, 0x74736f48205d5b3a?, 0x52444943796c6e4f?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002400788, {0xc00209fce6, 0x31a, 0x31a})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00094c1c0, {0xc00209fce6?, 0xc001fdd500?, 0x1000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001fae270, {0x402a020, 0xc000a0a070})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x402a160, 0xc001fae270}, {0x402a020, 0xc000a0a070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000aefe78?, {0x402a160, 0xc001fae270})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x5328a20?, {0x402a160?, 0xc001fae270?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x402a160, 0xc001fae270}, {0x402a0e0, 0xc00094c1c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000544600?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2375
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2261 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00231a4e0, 0x3adef10)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2131
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2269 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004ec8c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00231bd40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00231bd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00231bd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00231bd40, 0xc000aac080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2085 [chan receive, 7 minutes]:
testing.(*T).Run(0xc00231aea0, {0x307cd73?, 0xd18c2e2800?}, 0xc0025993b0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc00231aea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc00231aea0, 0x3aded08)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2344 [chan receive]:
testing.(*T).Run(0xc00094b1e0, {0x30b9a7e?, 0x24?}, 0xc001fb8000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc00094b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc00094b1e0, 0xc0025993b0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2085
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2263 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004ec8c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00231b380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00231b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00231b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00231b380, 0xc0006aab80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2261
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2365 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xc001fa5b10?, {0xc001fa5b20?, 0xf47f45?, 0x53a3b60?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x10000000000006d?, 0xc001fa5b80?, 0xf3fe76?, 0x5424100?, 0xc001fa5c08?, 0xf328db?, 0xf28c66?, 0x20000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x714, {0xc000627df8?, 0x208, 0xfe42bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001f92508?, {0xc000627df8?, 0xf6c211?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001f92508, {0xc000627df8, 0x208, 0x208})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a0a838, {0xc000627df8?, 0xc0028a81c0?, 0x6d?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0025e4150, {0x402a020, 0xc00094c258})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x402a160, 0xc0025e4150}, {0x402a020, 0xc00094c258}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001fa5e78?, {0x402a160, 0xc0025e4150})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x5328a20?, {0x402a160?, 0xc0025e4150?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x402a160, 0xc0025e4150}, {0x402a0e0, 0xc000a0a838}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000544540?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2364
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2240 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004ec8c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002210d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002210d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002210d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002210d00, 0xc0004cc500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2375 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffe44dd4de0?, {0xc002897ae0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x44c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000a09860)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00012ac60)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00012ac60)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00094b860, 0xc00012ac60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x404db20?, 0xc00041e070?}, 0xc00094b860, {0xc000790000?, 0x65f84255?}, {0xc019ffa8d4?, 0xc002897f60?}, {0x1077613?, 0xfc8eaf?}, {0xc000598180, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xd5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00094b860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00094b860, 0xc0004cc280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2374
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2378 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc00012ac60, 0xc0027461e0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2375
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2391 [select]:
os/exec.(*Cmd).watchCtx(0xc002752000, 0xc000544600)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2388
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2285 [syscall, locked to thread]:
syscall.SyscallN(0xc002163b10?, {0xc002163b20?, 0xf47f45?, 0x53a3b60?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x10000c0025e0377?, 0xc002163b80?, 0xf3fe76?, 0x5424100?, 0xc002163c08?, 0xf32a45?, 0x0?, 0x10000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6f0, {0xc00203654f?, 0x7ab1, 0xfe42bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00268ec88?, {0xc00203654f?, 0x7be9?, 0x7be9?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00268ec88, {0xc00203654f, 0x7ab1, 0x7ab1})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a0a748, {0xc00203654f?, 0x35fa?, 0x7e06?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002598120, {0x402a020, 0xc002b24018})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x402a160, 0xc002598120}, {0x402a020, 0xc002b24018}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc002163e78?, {0x402a160, 0xc002598120})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x5328a20?, {0x402a160?, 0xc002598120?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x402a160, 0xc002598120}, {0x402a0e0, 0xc000a0a748}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000544f60?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 644
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b


Test pass (160/206)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.51
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.42
9 TestDownloadOnly/v1.20.0/DeleteAll 1.53
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.54
12 TestDownloadOnly/v1.28.4/json-events 11.8
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.34
18 TestDownloadOnly/v1.28.4/DeleteAll 1.72
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 1.4
21 TestDownloadOnly/v1.29.0-rc.2/json-events 11.27
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.32
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 1.47
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 1.37
30 TestBinaryMirror 7.31
31 TestOffline 425.09
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.28
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.29
36 TestAddons/Setup 399.48
39 TestAddons/parallel/Ingress 68.13
40 TestAddons/parallel/InspektorGadget 27.6
41 TestAddons/parallel/MetricsServer 21.7
42 TestAddons/parallel/HelmTiller 33.07
44 TestAddons/parallel/CSI 95.19
45 TestAddons/parallel/Headlamp 37.33
46 TestAddons/parallel/CloudSpanner 22.3
47 TestAddons/parallel/LocalPath 87.17
48 TestAddons/parallel/NvidiaDevicePlugin 22.44
49 TestAddons/parallel/Yakd 6.02
52 TestAddons/serial/GCPAuth/Namespaces 0.37
53 TestAddons/StoppedEnableDisable 55.27
54 TestCertOptions 362.44
56 TestDockerFlags 440.84
57 TestForceSystemdFlag 275.88
65 TestErrorSpam/start 18
66 TestErrorSpam/status 37.84
67 TestErrorSpam/pause 23.79
68 TestErrorSpam/unpause 23.76
69 TestErrorSpam/stop 63.43
72 TestFunctional/serial/CopySyncFile 0.03
73 TestFunctional/serial/StartWithProxy 247.3
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 129.18
76 TestFunctional/serial/KubeContext 0.13
77 TestFunctional/serial/KubectlGetPods 0.24
80 TestFunctional/serial/CacheCmd/cache/add_remote 27.19
81 TestFunctional/serial/CacheCmd/cache/add_local 12.19
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.29
83 TestFunctional/serial/CacheCmd/cache/list 0.29
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.74
85 TestFunctional/serial/CacheCmd/cache/cache_reload 37.5
86 TestFunctional/serial/CacheCmd/cache/delete 0.57
87 TestFunctional/serial/MinikubeKubectlCmd 0.48
89 TestFunctional/serial/ExtraConfig 130.71
90 TestFunctional/serial/ComponentHealth 0.2
91 TestFunctional/serial/LogsCmd 8.94
92 TestFunctional/serial/LogsFileCmd 11.24
93 TestFunctional/serial/InvalidService 22.17
99 TestFunctional/parallel/StatusCmd 44
103 TestFunctional/parallel/ServiceCmdConnect 27.35
104 TestFunctional/parallel/AddonsCmd 0.92
105 TestFunctional/parallel/PersistentVolumeClaim 68.32
107 TestFunctional/parallel/SSHCmd 19.91
108 TestFunctional/parallel/CpCmd 57.18
109 TestFunctional/parallel/MySQL 68.17
110 TestFunctional/parallel/FileSync 10.42
111 TestFunctional/parallel/CertSync 69.93
115 TestFunctional/parallel/NodeLabels 0.2
117 TestFunctional/parallel/NonActiveRuntimeDisabled 10.65
119 TestFunctional/parallel/License 3.5
120 TestFunctional/parallel/ServiceCmd/DeployApp 19.48
121 TestFunctional/parallel/ProfileCmd/profile_not_create 12.01
122 TestFunctional/parallel/Version/short 0.28
123 TestFunctional/parallel/Version/components 8.58
124 TestFunctional/parallel/ProfileCmd/profile_list 12.33
125 TestFunctional/parallel/ImageCommands/ImageListShort 7.97
126 TestFunctional/parallel/ImageCommands/ImageListTable 8.16
127 TestFunctional/parallel/ImageCommands/ImageListJson 8.05
128 TestFunctional/parallel/ImageCommands/ImageListYaml 7.97
129 TestFunctional/parallel/ImageCommands/ImageBuild 28.64
130 TestFunctional/parallel/ImageCommands/Setup 4.32
131 TestFunctional/parallel/ServiceCmd/List 15.29
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 27.59
133 TestFunctional/parallel/ProfileCmd/profile_json_output 11.95
134 TestFunctional/parallel/ServiceCmd/JSONOutput 14.64
135 TestFunctional/parallel/DockerEnv/powershell 50.56
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 23.7
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 30.01
141 TestFunctional/parallel/UpdateContextCmd/no_changes 2.67
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.65
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.68
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 10.26
145 TestFunctional/parallel/ImageCommands/ImageRemove 16.6
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 17.95
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 10.95
149 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 8.44
150 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
152 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.6
158 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
159 TestFunctional/delete_addon-resizer_images 0.51
160 TestFunctional/delete_my-image_image 0.18
161 TestFunctional/delete_minikube_cached_images 0.19
165 TestMultiControlPlane/serial/StartCluster 749.54
166 TestMultiControlPlane/serial/DeployApp 43.67
168 TestMultiControlPlane/serial/AddWorkerNode 262.94
169 TestMultiControlPlane/serial/NodeLabels 0.2
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 29.76
174 TestImageBuild/serial/Setup 207.21
175 TestImageBuild/serial/NormalBuild 10.24
176 TestImageBuild/serial/BuildWithBuildArg 9.51
177 TestImageBuild/serial/BuildWithDockerIgnore 8.19
178 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.99
182 TestJSONOutput/start/Command 252.34
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 8.32
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 8.2
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 40.87
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 1.8
210 TestMainNoArgs 0.49
211 TestMinikubeProfile 590.01
214 TestMountStart/serial/StartWithMountFirst 161.39
215 TestMountStart/serial/VerifyMountFirst 9.94
216 TestMountStart/serial/StartWithMountSecond 162.52
217 TestMountStart/serial/VerifyMountSecond 9.82
218 TestMountStart/serial/DeleteFirst 32.76
219 TestMountStart/serial/VerifyMountPostDelete 9.79
220 TestMountStart/serial/Stop 31.33
221 TestMountStart/serial/RestartStopped 122.46
222 TestMountStart/serial/VerifyMountPostStop 9.93
225 TestMultiNode/serial/FreshStart2Nodes 447.99
226 TestMultiNode/serial/DeployApp2Nodes 9.99
228 TestMultiNode/serial/AddNode 240.44
229 TestMultiNode/serial/MultiNodeLabels 0.2
230 TestMultiNode/serial/ProfileList 33.48
231 TestMultiNode/serial/CopyFile 377.8
232 TestMultiNode/serial/StopNode 79.85
233 TestMultiNode/serial/StartAfterStop 190.29
238 TestPreload 545.49
239 TestScheduledStopWindows 347.35
244 TestRunningBinaryUpgrade 1131.99
246 TestKubernetesUpgrade 1413.49
249 TestNoKubernetes/serial/StartNoK8sWithVersion 0.4
251 TestStoppedBinaryUpgrade/Setup 0.69
252 TestStoppedBinaryUpgrade/Upgrade 890.53
273 TestStoppedBinaryUpgrade/MinikubeLogs 11.5
TestDownloadOnly/v1.20.0/json-events (16.51s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-068600 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-068600 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (16.5135688s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.51s)

TestDownloadOnly/v1.20.0/preload-exists (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.42s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-068600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-068600: exit status 85 (418.5533ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-068600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:34 UTC |          |
	|         | -p download-only-068600        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 10:34:41
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 10:34:41.030000   10972 out.go:291] Setting OutFile to fd 604 ...
	I0318 10:34:41.031103   10972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 10:34:41.031219   10972 out.go:304] Setting ErrFile to fd 608...
	I0318 10:34:41.031248   10972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 10:34:41.044341   10972 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0318 10:34:41.057152   10972 out.go:298] Setting JSON to true
	I0318 10:34:41.059492   10972 start.go:129] hostinfo: {"hostname":"minikube6","uptime":133405,"bootTime":1710624675,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0318 10:34:41.060493   10972 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 10:34:41.068292   10972 out.go:97] [download-only-068600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0318 10:34:41.069130   10972 notify.go:220] Checking for updates...
	I0318 10:34:41.071722   10972 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	W0318 10:34:41.069720   10972 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0318 10:34:41.074716   10972 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0318 10:34:41.077082   10972 out.go:169] MINIKUBE_LOCATION=18431
	I0318 10:34:41.080052   10972 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0318 10:34:41.084433   10972 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 10:34:41.084433   10972 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 10:34:46.934311   10972 out.go:97] Using the hyperv driver based on user configuration
	I0318 10:34:46.934311   10972 start.go:297] selected driver: hyperv
	I0318 10:34:46.934311   10972 start.go:901] validating driver "hyperv" against <nil>
	I0318 10:34:46.934311   10972 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 10:34:46.987972   10972 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0318 10:34:46.988872   10972 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 10:34:46.988872   10972 cni.go:84] Creating CNI manager for ""
	I0318 10:34:46.988872   10972 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 10:34:46.989451   10972 start.go:340] cluster config:
	{Name:download-only-068600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-068600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 10:34:46.990108   10972 iso.go:125] acquiring lock: {Name:mk859ea173f7c19f70b69d7017f4a5a661cd1500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 10:34:46.993333   10972 out.go:97] Downloading VM boot image ...
	I0318 10:34:46.993518   10972 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 10:34:50.786090   10972 out.go:97] Starting "download-only-068600" primary control-plane node in "download-only-068600" cluster
	I0318 10:34:50.786691   10972 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 10:34:50.838454   10972 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0318 10:34:50.838454   10972 cache.go:56] Caching tarball of preloaded images
	I0318 10:34:50.838973   10972 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 10:34:50.842651   10972 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 10:34:50.842651   10972 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0318 10:34:50.907882   10972 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0318 10:34:54.442797   10972 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0318 10:34:54.444155   10972 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0318 10:34:55.511528   10972 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 10:34:55.512603   10972 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-068600\config.json ...
	I0318 10:34:55.513299   10972 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-068600\config.json: {Name:mk1b48f08eccd41827b9206a6d16d5aa16dcb9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:34:55.514786   10972 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 10:34:55.516200   10972 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-068600 host does not exist
	  To start a cluster, run: "minikube start -p download-only-068600"

-- /stdout --
** stderr ** 
	W0318 10:34:57.544438    7724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.42s)

TestDownloadOnly/v1.20.0/DeleteAll (1.53s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.5284335s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.53s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.54s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-068600
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-068600: (1.5419819s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.54s)

TestDownloadOnly/v1.28.4/json-events (11.8s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-369300 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-369300 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv: (11.7950811s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (11.80s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.34s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-369300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-369300: exit status 85 (341.8843ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-068600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:34 UTC |                     |
	|         | -p download-only-068600        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:34 UTC | 18 Mar 24 10:34 UTC |
	| delete  | -p download-only-068600        | download-only-068600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:34 UTC | 18 Mar 24 10:35 UTC |
	| start   | -o=json --download-only        | download-only-369300 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC |                     |
	|         | -p download-only-369300        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 10:35:01
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 10:35:01.112498   14036 out.go:291] Setting OutFile to fd 708 ...
	I0318 10:35:01.113497   14036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 10:35:01.113497   14036 out.go:304] Setting ErrFile to fd 712...
	I0318 10:35:01.113497   14036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 10:35:01.135498   14036 out.go:298] Setting JSON to true
	I0318 10:35:01.138502   14036 start.go:129] hostinfo: {"hostname":"minikube6","uptime":133425,"bootTime":1710624675,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0318 10:35:01.138502   14036 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 10:35:01.163500   14036 out.go:97] [download-only-369300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0318 10:35:01.163994   14036 notify.go:220] Checking for updates...
	I0318 10:35:01.166127   14036 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 10:35:01.169162   14036 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0318 10:35:01.173515   14036 out.go:169] MINIKUBE_LOCATION=18431
	I0318 10:35:01.178235   14036 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0318 10:35:01.182518   14036 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 10:35:01.183757   14036 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 10:35:06.898013   14036 out.go:97] Using the hyperv driver based on user configuration
	I0318 10:35:06.898744   14036 start.go:297] selected driver: hyperv
	I0318 10:35:06.898744   14036 start.go:901] validating driver "hyperv" against <nil>
	I0318 10:35:06.898744   14036 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 10:35:06.948350   14036 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0318 10:35:06.950140   14036 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 10:35:06.950200   14036 cni.go:84] Creating CNI manager for ""
	I0318 10:35:06.950304   14036 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 10:35:06.950304   14036 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 10:35:06.950636   14036 start.go:340] cluster config:
	{Name:download-only-369300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-369300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 10:35:06.950636   14036 iso.go:125] acquiring lock: {Name:mk859ea173f7c19f70b69d7017f4a5a661cd1500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 10:35:06.954140   14036 out.go:97] Starting "download-only-369300" primary control-plane node in "download-only-369300" cluster
	I0318 10:35:06.954140   14036 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 10:35:06.994484   14036 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 10:35:06.995194   14036 cache.go:56] Caching tarball of preloaded images
	I0318 10:35:06.995694   14036 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 10:35:06.998494   14036 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0318 10:35:06.998494   14036 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0318 10:35:07.068740   14036 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 10:35:10.655993   14036 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0318 10:35:10.656936   14036 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-369300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-369300"

-- /stdout --
** stderr ** 
	W0318 10:35:12.833996   13728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.34s)

TestDownloadOnly/v1.28.4/DeleteAll (1.72s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.7152462s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (1.72s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.4s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-369300
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-369300: (1.3962147s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.40s)

TestDownloadOnly/v1.29.0-rc.2/json-events (11.27s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-324500 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-324500 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv: (11.2704016s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (11.27s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-324500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-324500: exit status 85 (314.9979ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-068600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:34 UTC |                     |
	|         | -p download-only-068600           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:34 UTC | 18 Mar 24 10:34 UTC |
	| delete  | -p download-only-068600           | download-only-068600 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:34 UTC | 18 Mar 24 10:35 UTC |
	| start   | -o=json --download-only           | download-only-369300 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC |                     |
	|         | -p download-only-369300           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC | 18 Mar 24 10:35 UTC |
	| delete  | -p download-only-369300           | download-only-369300 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC | 18 Mar 24 10:35 UTC |
	| start   | -o=json --download-only           | download-only-324500 | minikube6\jenkins | v1.32.0 | 18 Mar 24 10:35 UTC |                     |
	|         | -p download-only-324500           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 10:35:16
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 10:35:16.365449    3840 out.go:291] Setting OutFile to fd 668 ...
	I0318 10:35:16.365449    3840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 10:35:16.366476    3840 out.go:304] Setting ErrFile to fd 720...
	I0318 10:35:16.366476    3840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 10:35:16.392903    3840 out.go:298] Setting JSON to true
	I0318 10:35:16.396219    3840 start.go:129] hostinfo: {"hostname":"minikube6","uptime":133440,"bootTime":1710624675,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0318 10:35:16.396373    3840 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 10:35:16.404754    3840 out.go:97] [download-only-324500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0318 10:35:16.408242    3840 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 10:35:16.405890    3840 notify.go:220] Checking for updates...
	I0318 10:35:16.412885    3840 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0318 10:35:16.415481    3840 out.go:169] MINIKUBE_LOCATION=18431
	I0318 10:35:16.417821    3840 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0318 10:35:16.424541    3840 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 10:35:16.425649    3840 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 10:35:21.995553    3840 out.go:97] Using the hyperv driver based on user configuration
	I0318 10:35:21.995635    3840 start.go:297] selected driver: hyperv
	I0318 10:35:21.995700    3840 start.go:901] validating driver "hyperv" against <nil>
	I0318 10:35:21.995974    3840 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 10:35:22.045452    3840 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0318 10:35:22.046367    3840 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 10:35:22.046966    3840 cni.go:84] Creating CNI manager for ""
	I0318 10:35:22.047044    3840 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 10:35:22.047078    3840 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 10:35:22.047078    3840 start.go:340] cluster config:
	{Name:download-only-324500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-324500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 10:35:22.047078    3840 iso.go:125] acquiring lock: {Name:mk859ea173f7c19f70b69d7017f4a5a661cd1500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 10:35:22.050820    3840 out.go:97] Starting "download-only-324500" primary control-plane node in "download-only-324500" cluster
	I0318 10:35:22.050820    3840 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 10:35:22.098934    3840 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0318 10:35:22.098998    3840 cache.go:56] Caching tarball of preloaded images
	I0318 10:35:22.099359    3840 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 10:35:22.104008    3840 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0318 10:35:22.104008    3840 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0318 10:35:22.170971    3840 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0318 10:35:25.347990    3840 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0318 10:35:25.348670    3840 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0318 10:35:26.295762    3840 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 10:35:26.296473    3840 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-324500\config.json ...
	I0318 10:35:26.297100    3840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-324500\config.json: {Name:mk5c0e90a72d01a732d5880d2cd0228cf5e07874 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 10:35:26.298641    3840 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 10:35:26.298900    3840 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.29.0-rc.2/kubectl.exe
	
	
	* The control-plane node download-only-324500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-324500"

-- /stdout --
** stderr ** 
	W0318 10:35:27.560308    8640 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.32s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.47s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.4706147s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.47s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.37s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-324500
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-324500: (1.3694949s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.37s)

TestBinaryMirror (7.31s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-301800 --alsologtostderr --binary-mirror http://127.0.0.1:53180 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-301800 --alsologtostderr --binary-mirror http://127.0.0.1:53180 --driver=hyperv: (6.3784233s)
helpers_test.go:175: Cleaning up "binary-mirror-301800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-301800
--- PASS: TestBinaryMirror (7.31s)

TestOffline (425.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-148100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-148100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (6m16.0626394s)
helpers_test.go:175: Cleaning up "offline-docker-148100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-148100
E0318 13:09:49.645841    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-148100: (49.0245811s)
--- PASS: TestOffline (425.09s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-748800
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-748800: exit status 85 (277.4899ms)

-- stdout --
	* Profile "addons-748800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-748800"

-- /stdout --
** stderr ** 
	W0318 10:35:42.282319    8376 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-748800
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-748800: exit status 85 (285.9735ms)

-- stdout --
	* Profile "addons-748800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-748800"

-- /stdout --
** stderr ** 
	W0318 10:35:42.282319    5632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)

TestAddons/Setup (399.48s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-748800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-748800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m39.4781576s)
--- PASS: TestAddons/Setup (399.48s)

TestAddons/parallel/Ingress (68.13s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-748800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-748800 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-748800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [281f01d5-82de-4aea-89c5-d075459099e7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [281f01d5-82de-4aea-89c5-d075459099e7] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0242761s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-748800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-748800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.4535856s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-748800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0318 10:44:00.497579   12296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-748800 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-748800 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-748800 ip: (2.7250085s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.25.150.46
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-748800 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-748800 addons disable ingress-dns --alsologtostderr -v=1: (16.473488s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-748800 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-748800 addons disable ingress --alsologtostderr -v=1: (22.1878095s)
--- PASS: TestAddons/parallel/Ingress (68.13s)

TestAddons/parallel/InspektorGadget (27.6s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-w4zxv" [6d936709-220d-40fd-9625-b4e0a1368d67] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0212773s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-748800
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-748800: (21.5783551s)
--- PASS: TestAddons/parallel/InspektorGadget (27.60s)

TestAddons/parallel/MetricsServer (21.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 9.0188ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-z7ml8" [9ca72841-3a3f-4495-be4a-eaa6cfc05271] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0143892s
addons_test.go:415: (dbg) Run:  kubectl --context addons-748800 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-748800 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-748800 addons disable metrics-server --alsologtostderr -v=1: (16.4610724s)
--- PASS: TestAddons/parallel/MetricsServer (21.70s)

TestAddons/parallel/HelmTiller (33.07s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.9681ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-f52g9" [687334c7-33e3-453f-8f41-bad41c523ac2] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0209213s
addons_test.go:473: (dbg) Run:  kubectl --context addons-748800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-748800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.8986983s)
addons_test.go:478: kubectl --context addons-748800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-748800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-748800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.5496369s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-748800 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-748800 addons disable helm-tiller --alsologtostderr -v=1: (15.9422919s)
--- PASS: TestAddons/parallel/HelmTiller (33.07s)

TestAddons/parallel/CSI (95.19s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 25.0126ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-748800 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-748800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fa61f675-c413-4b6b-b51a-5b4a6c5bc02c] Pending
helpers_test.go:344: "task-pv-pod" [fa61f675-c413-4b6b-b51a-5b4a6c5bc02c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fa61f675-c413-4b6b-b51a-5b4a6c5bc02c] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 24.0270701s
addons_test.go:584: (dbg) Run:  kubectl --context addons-748800 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-748800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-748800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-748800 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-748800 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-748800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-748800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d437966e-b607-4c11-a40d-4ddfd4c1200d] Pending
helpers_test.go:344: "task-pv-pod-restore" [d437966e-b607-4c11-a40d-4ddfd4c1200d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d437966e-b607-4c11-a40d-4ddfd4c1200d] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0164152s
addons_test.go:626: (dbg) Run:  kubectl --context addons-748800 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-748800 delete pod task-pv-pod-restore: (1.5486656s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-748800 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-748800 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-748800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-748800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (22.6327617s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-748800 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-748800 addons disable volumesnapshots --alsologtostderr -v=1: (16.4385154s)
--- PASS: TestAddons/parallel/CSI (95.19s)

TestAddons/parallel/Headlamp (37.33s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-748800 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-748800 --alsologtostderr -v=1: (18.3110452s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-bp7fl" [503fcd64-7e33-4001-abc0-d79e24a3cd97] Pending
helpers_test.go:344: "headlamp-5485c556b-bp7fl" [503fcd64-7e33-4001-abc0-d79e24a3cd97] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-bp7fl" [503fcd64-7e33-4001-abc0-d79e24a3cd97] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.0128635s
--- PASS: TestAddons/parallel/Headlamp (37.33s)

TestAddons/parallel/CloudSpanner (22.3s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-fmjq4" [6d034814-4994-4e85-b139-03ac8910f5e0] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0163891s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-748800
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-748800: (16.2656591s)
--- PASS: TestAddons/parallel/CloudSpanner (22.30s)

TestAddons/parallel/LocalPath (87.17s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-748800 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-748800 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748800 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0e8e5e11-d7d9-40b5-a00d-9ad0eb379fa8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0e8e5e11-d7d9-40b5-a00d-9ad0eb379fa8] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0e8e5e11-d7d9-40b5-a00d-9ad0eb379fa8] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0180011s
addons_test.go:891: (dbg) Run:  kubectl --context addons-748800 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-748800 ssh "cat /opt/local-path-provisioner/pvc-78236935-41c5-4ffc-887f-2d2dd3c5a049_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-748800 ssh "cat /opt/local-path-provisioner/pvc-78236935-41c5-4ffc-887f-2d2dd3c5a049_default_test-pvc/file1": (10.994612s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-748800 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-748800 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-748800 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-748800 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m2.4086879s)
--- PASS: TestAddons/parallel/LocalPath (87.17s)
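
Note: the eight identical "get pvc" invocations above are a poll, not noise. The local-path provisioner normally binds via a WaitForFirstConsumer storage class, so test-pvc stays Pending until the consuming pod is scheduled; the helper simply re-reads .status.phase until it changes. A minimal sketch of the same check, using the profile and claim names from the log:

    kubectl --context addons-748800 get pvc test-pvc -n default -o jsonpath={.status.phase}
    # rerun until the output reads "Bound"; the helper above does exactly this on an interval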

TestAddons/parallel/NvidiaDevicePlugin (22.44s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7gh64" [13df6930-c649-4c69-899b-ead23cdccba1] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0194784s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-748800
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-748800: (17.4145011s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (22.44s)

TestAddons/parallel/Yakd (6.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-vcxng" [2a8ab6a9-4bc5-43e2-b428-bf5b047aa9f5] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0070788s
--- PASS: TestAddons/parallel/Yakd (6.02s)

TestAddons/serial/GCPAuth/Namespaces (0.37s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-748800 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-748800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.37s)
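
Note: the assertion here appears to be that the gcp-auth addon propagates its credentials secret into namespaces created after the addon was enabled, which is why "get secret gcp-auth" is expected to succeed in a namespace nobody populated by hand. The same check by hand (profile name from the log):

    kubectl --context addons-748800 create ns new-namespace
    kubectl --context addons-748800 get secret gcp-auth -n new-namespace   # should already exist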

TestAddons/StoppedEnableDisable (55.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-748800
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-748800: (42.1787196s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-748800
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-748800: (5.5958677s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-748800
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-748800: (4.7489176s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-748800
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-748800: (2.745109s)
--- PASS: TestAddons/StoppedEnableDisable (55.27s)
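
Note: the enable/disable calls run against a stopped cluster on purpose; addon toggles on a stopped profile only update its saved config, to be applied on the next start. Sketch with the profile name from the log (the plain "minikube" name stands in for out/minikube-windows-amd64.exe in these sketches):

    minikube stop -p addons-748800
    minikube addons enable dashboard -p addons-748800    # recorded in the profile config
    minikube addons disable dashboard -p addons-748800   # likewise applied at next start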

TestCertOptions (362.44s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-434300 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-434300 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (4m54.1197113s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-434300 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
E0318 13:27:21.994084    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-434300 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (11.7009023s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-434300 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-434300 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-434300 -- "sudo cat /etc/kubernetes/admin.conf": (11.3422926s)
helpers_test.go:175: Cleaning up "cert-options-434300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-434300
E0318 13:27:52.899813    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-434300: (45.059337s)
--- PASS: TestCertOptions (362.44s)
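
The "openssl x509 -text" call above dumps the generated apiserver certificate so the test can assert that the extra --apiserver-ips and --apiserver-names values were baked into it. A hedged way to eyeball the same thing (grep assumed available inside the Linux guest, not on the Windows host):

    minikube -p cert-options-434300 ssh -- "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'"
    # expect 192.168.15.15 and www.google.com among the SAN entries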

TestDockerFlags (440.84s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-891000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-891000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (6m9.6571397s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-891000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-891000 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.7552037s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-891000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-891000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.5793992s)
helpers_test.go:175: Cleaning up "docker-flags-891000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-891000
E0318 13:14:49.643463    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-891000: (49.8475355s)
--- PASS: TestDockerFlags (440.84s)
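
The two "systemctl show" probes read the generated docker unit back out: --docker-env values should surface in the Environment= property and --docker-opt values as daemon flags in ExecStart. The same inspection by hand (the exact flag spellings in ExecStart, e.g. --debug and --icc=true, are an assumption):

    minikube -p docker-flags-891000 ssh -- "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
    minikube -p docker-flags-891000 ssh -- "sudo systemctl show docker --property=ExecStart --no-pager"     # expect the --docker-opt values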

TestForceSystemdFlag (275.88s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-148100 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-148100 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (3m35.2905723s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-148100 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-148100 ssh "docker info --format {{.CgroupDriver}}": (10.6413102s)
helpers_test.go:175: Cleaning up "force-systemd-flag-148100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-148100
E0318 13:07:22.002691    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-148100: (49.9493388s)
--- PASS: TestForceSystemdFlag (275.88s)
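
--force-systemd switches Docker's cgroup driver from the default cgroupfs to systemd, and the "docker info" probe reads the active driver back. One-liner version:

    minikube -p force-systemd-flag-148100 ssh -- "docker info --format {{.CgroupDriver}}"   # expect: systemd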

TestErrorSpam/start (18s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 start --dry-run: (5.9192445s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 start --dry-run: (6.0501722s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 start --dry-run: (6.0309593s)
--- PASS: TestErrorSpam/start (18.00s)

TestErrorSpam/status (37.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 status: (13.0391209s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 status: (12.4071042s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 status: (12.3854197s)
--- PASS: TestErrorSpam/status (37.84s)

TestErrorSpam/pause (23.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 pause: (8.1606881s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 pause: (7.8507676s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 pause: (7.7782476s)
--- PASS: TestErrorSpam/pause (23.79s)

TestErrorSpam/unpause (23.76s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 unpause: (7.9836597s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 unpause: (7.883862s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 unpause: (7.8902396s)
--- PASS: TestErrorSpam/unpause (23.76s)

TestErrorSpam/stop (63.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 stop
E0318 10:52:21.939007    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 10:52:49.751889    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 stop: (40.7551886s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 stop: (11.5694246s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-013800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-013800 stop: (11.1001602s)
--- PASS: TestErrorSpam/stop (63.43s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9120\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (247.3s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-499500 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0318 10:57:21.942903    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-499500 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (4m7.2940107s)
--- PASS: TestFunctional/serial/StartWithProxy (247.30s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (129.18s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-499500 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-499500 --alsologtostderr -v=8: (2m9.1804547s)
functional_test.go:659: soft start took 2m9.1815846s for "functional-499500" cluster.
--- PASS: TestFunctional/serial/SoftStart (129.18s)

TestFunctional/serial/KubeContext (0.13s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

TestFunctional/serial/KubectlGetPods (0.24s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-499500 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.24s)

TestFunctional/serial/CacheCmd/cache/add_remote (27.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 cache add registry.k8s.io/pause:3.1: (9.1182218s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 cache add registry.k8s.io/pause:3.3: (8.9663668s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 cache add registry.k8s.io/pause:latest: (9.1083789s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (27.19s)

TestFunctional/serial/CacheCmd/cache/add_local (12.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-499500 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local363326962\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-499500 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local363326962\001: (3.2944049s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 cache add minikube-local-cache-test:functional-499500
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 cache add minikube-local-cache-test:functional-499500: (8.3706426s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 cache delete minikube-local-cache-test:functional-499500
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-499500
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (12.19s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.29s)

TestFunctional/serial/CacheCmd/cache/list (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.29s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh sudo crictl images: (9.7410437s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.74s)

TestFunctional/serial/CacheCmd/cache/cache_reload (37.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.7253768s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-499500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.6429465s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0318 11:00:51.864043   14088 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 cache reload: (8.4007137s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.7237443s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (37.50s)
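
The sequence above exercises the full reload path: remove a cached image inside the VM, confirm "crictl inspecti" now fails, run "cache reload" to re-push everything held in minikube's local cache, then confirm the image is back. As plain commands (profile and image names from the log):

    minikube -p functional-499500 ssh -- sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-499500 ssh -- sudo crictl inspecti registry.k8s.io/pause:latest   # non-zero exit: image gone
    minikube -p functional-499500 cache reload
    minikube -p functional-499500 ssh -- sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again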

TestFunctional/serial/CacheCmd/cache/delete (0.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.57s)

TestFunctional/serial/MinikubeKubectlCmd (0.48s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 kubectl -- --context functional-499500 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.48s)

TestFunctional/serial/ExtraConfig (130.71s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-499500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0318 11:02:21.941810    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 11:03:45.130802    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-499500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m10.7073633s)
functional_test.go:757: restart took 2m10.7073633s for "functional-499500" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (130.71s)

TestFunctional/serial/ComponentHealth (0.2s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-499500 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.20s)

TestFunctional/serial/LogsCmd (8.94s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 logs: (8.9411699s)
--- PASS: TestFunctional/serial/LogsCmd (8.94s)

TestFunctional/serial/LogsFileCmd (11.24s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1873243909\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1873243909\001\logs.txt: (11.2335332s)
--- PASS: TestFunctional/serial/LogsFileCmd (11.24s)

TestFunctional/serial/InvalidService (22.17s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-499500 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-499500
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-499500: exit status 115 (17.4277943s)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.25.151.65:31966 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	W0318 11:04:30.485404    4484 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-499500 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-499500 delete -f testdata\invalidsvc.yaml: (1.3110549s)
--- PASS: TestFunctional/serial/InvalidService (22.17s)
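
Exit status 115 is the expected result here: "minikube service" maps the SVC_UNREACHABLE error shown in stderr to that exit code when the service has no running backing pod. A hedged repro (the contents of testdata\invalidsvc.yaml are not shown in the log; any NodePort service whose selector matches no pods should behave the same way):

    kubectl --context functional-499500 apply -f testdata\invalidsvc.yaml
    minikube service invalid-svc -p functional-499500    # exits 115 with SVC_UNREACHABLE
    kubectl --context functional-499500 delete -f testdata\invalidsvc.yaml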

TestFunctional/parallel/StatusCmd (44s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 status: (14.1689739s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (15.4323966s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 status -o json: (14.399991s)
--- PASS: TestFunctional/parallel/StatusCmd (44.00s)
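
Note the -f probe: status accepts an arbitrary Go template over the status struct, so callers can pick their own layout (the "kublet:" label above is verbatim from the test's format string, not a struct field). Sketch of the two machine-readable forms:

    minikube -p functional-499500 status -f host:{{.Host}},apiserver:{{.APIServer}}   # template over status fields
    minikube -p functional-499500 status -o json                                      # full JSON form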

TestFunctional/parallel/ServiceCmdConnect (27.35s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-499500 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-499500 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-mtz2x" [eba395aa-d78c-48c2-8804-f3f6261bac01] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-mtz2x" [eba395aa-d78c-48c2-8804-f3f6261bac01] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0228309s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 service hello-node-connect --url: (18.8375053s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.25.151.65:31559
functional_test.go:1671: http://172.25.151.65:31559: success! body:

Hostname: hello-node-connect-55497b8b78-mtz2x

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.25.151.65:8080/

Request Headers:
	accept-encoding=gzip
	host=172.25.151.65:31559
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (27.35s)

TestFunctional/parallel/AddonsCmd (0.92s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.92s)

TestFunctional/parallel/PersistentVolumeClaim (68.32s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8c751c63-0f7c-44a2-a6ae-f56aca9513fd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0172155s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-499500 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-499500 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-499500 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-499500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [568d1ef1-3b53-49cb-99bf-9667f861e48e] Pending
helpers_test.go:344: "sp-pod" [568d1ef1-3b53-49cb-99bf-9667f861e48e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [568d1ef1-3b53-49cb-99bf-9667f861e48e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 50.0139282s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-499500 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-499500 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-499500 delete -f testdata/storage-provisioner/pod.yaml: (2.117613s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-499500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [770e1534-6b11-4590-998a-d51879784e87] Pending
helpers_test.go:344: "sp-pod" [770e1534-6b11-4590-998a-d51879784e87] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [770e1534-6b11-4590-998a-d51879784e87] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0136662s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-499500 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (68.32s)
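
The second half of this test is the actual persistence assertion: write a file through the first pod, delete and recreate the pod against the same claim, and check that the file survived. As plain commands (manifest paths and pod name from the log):

    kubectl --context functional-499500 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-499500 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-499500 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-499500 exec sp-pod -- ls /tmp/mount   # expect foo: the volume outlived the pod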

TestFunctional/parallel/SSHCmd (19.91s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh "echo hello": (9.7687965s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh "cat /etc/hostname": (10.1349985s)
--- PASS: TestFunctional/parallel/SSHCmd (19.91s)

TestFunctional/parallel/CpCmd (57.18s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.0309257s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh -n functional-499500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh -n functional-499500 "sudo cat /home/docker/cp-test.txt": (10.2965657s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 cp functional-499500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd1377832470\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 cp functional-499500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd1377832470\001\cp-test.txt: (10.4024734s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh -n functional-499500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh -n functional-499500 "sudo cat /home/docker/cp-test.txt": (10.4088919s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.0973437s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh -n functional-499500 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh -n functional-499500 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.9317329s)
--- PASS: TestFunctional/parallel/CpCmd (57.18s)

TestFunctional/parallel/MySQL (68.17s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-499500 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-bfzjv" [da7abb74-03df-4a87-b7af-fdc51f5e7700] Pending
helpers_test.go:344: "mysql-859648c796-bfzjv" [da7abb74-03df-4a87-b7af-fdc51f5e7700] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-bfzjv" [da7abb74-03df-4a87-b7af-fdc51f5e7700] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 49.2578316s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;": exit status 1 (402.4201ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;": exit status 1 (345.1293ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;": exit status 1 (344.8755ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;": exit status 1 (434.2911ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;": exit status 1 (763.8334ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;": exit status 1 (312.2494ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (68.17s)
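
Note: the six non-zero exits above are the test's retry loop, not failures. ERROR 2002 means mysqld is not yet listening on its socket, and ERROR 1045 likely appears while the image's entrypoint is still initializing credentials; the sub-test passes once the final query succeeds. The manual equivalent (pod name from the log):

    kubectl --context functional-499500 exec mysql-859648c796-bfzjv -- mysql -ppassword -e "show databases;"
    # rerun until ERROR 2002/1045 stop appearing; the test does the same with a backoff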

TestFunctional/parallel/FileSync (10.42s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/9120/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /etc/test/nested/copy/9120/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /etc/test/nested/copy/9120/hosts": (10.4172466s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.42s)

                                                
                                    
TestFunctional/parallel/CertSync (69.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/9120.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /etc/ssl/certs/9120.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /etc/ssl/certs/9120.pem": (11.9101299s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/9120.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /usr/share/ca-certificates/9120.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /usr/share/ca-certificates/9120.pem": (11.8247779s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /etc/ssl/certs/51391683.0": (12.0894123s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/91202.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /etc/ssl/certs/91202.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /etc/ssl/certs/91202.pem": (11.7213589s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/91202.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /usr/share/ca-certificates/91202.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /usr/share/ca-certificates/91202.pem": (11.5107241s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.8665395s)
--- PASS: TestFunctional/parallel/CertSync (69.93s)
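
Note: the 51391683.0 and 3ec20f2e.0 entries are, by all appearances, the OpenSSL subject-hash filenames (c_rehash convention) for the two synced certificates. Assuming that convention, the hash part of the name can be reproduced from the PEM itself:

	# Prints the 8-hex-digit subject hash used as the certificate's filename stem.
	openssl x509 -noout -subject_hash -in 9120.pem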

                                                
                                    
TestFunctional/parallel/NodeLabels (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-499500 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.20s)
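
Note: the go-template above dumps every label key on the first node. A hypothetical jsonpath equivalent, for readers less familiar with go-templates:

	kubectl --context functional-499500 get nodes -o jsonpath="{.items[0].metadata.labels}"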

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (10.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo systemctl is-active crio": exit status 1 (10.6535942s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:04:54.588081    6584 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.65s)
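
Note: the non-zero exit is the point of this test. systemctl is-active prints "inactive" and exits with status 3 when a unit is not running; ssh propagates that ("Process exited with status 3"), confirming crio is disabled while docker is the active runtime. To check by hand:

	out/minikube-windows-amd64.exe -p functional-499500 ssh "sudo systemctl is-active crio"
	# A non-zero exit with "inactive" on stdout is the expected result on a docker-runtime cluster.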

                                                
                                    
TestFunctional/parallel/License (3.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.4929246s)
--- PASS: TestFunctional/parallel/License (3.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (19.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-499500 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-499500 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-6x7fv" [ad5e03d0-3b7e-4f22-b958-b9893d3b8ff5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-6x7fv" [ad5e03d0-3b7e-4f22-b958-b9893d3b8ff5] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.009433s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.48s)
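
A hedged sketch of the same deploy-and-wait sequence, substituting kubectl wait for the test's own polling helper:

	kubectl --context functional-499500 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-499500 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-499500 wait --for=condition=Ready pod -l app=hello-node --timeout=10m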

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (12.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.5026501s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (12.01s)

                                                
                                    
TestFunctional/parallel/Version/short (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 version --short
--- PASS: TestFunctional/parallel/Version/short (0.28s)

                                                
                                    
TestFunctional/parallel/Version/components (8.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 version -o=json --components: (8.5739569s)
--- PASS: TestFunctional/parallel/Version/components (8.58s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (12.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (12.0511964s)
functional_test.go:1311: Took "12.0522337s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "276.9474ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (12.33s)
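
Note: the timing gap (about 12s versus 0.28s) reflects that plain profile list queries each cluster's live status over Hyper-V, while -l/--light only reads the cached profile config:

	out/minikube-windows-amd64.exe profile list      # contacts the VM to check status; slow on Hyper-V
	out/minikube-windows-amd64.exe profile list -l   # cached config only; no status check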

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (7.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image ls --format short --alsologtostderr: (7.9662905s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-499500 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-499500
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-499500
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-499500 image ls --format short --alsologtostderr:
W0318 11:08:27.785746    7692 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0318 11:08:27.869745    7692 out.go:291] Setting OutFile to fd 1044 ...
I0318 11:08:27.870744    7692 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:08:27.870744    7692 out.go:304] Setting ErrFile to fd 1048...
I0318 11:08:27.870744    7692 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:08:27.886746    7692 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:08:27.887748    7692 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:08:27.888746    7692 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
I0318 11:08:30.222370    7692 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0318 11:08:30.222439    7692 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:30.236470    7692 ssh_runner.go:195] Run: systemctl --version
I0318 11:08:30.236470    7692 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
I0318 11:08:32.607932    7692 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0318 11:08:32.608100    7692 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:32.608197    7692 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
I0318 11:08:35.438810    7692 main.go:141] libmachine: [stdout =====>] : 172.25.151.65

                                                
                                                
I0318 11:08:35.438810    7692 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:35.438810    7692 sshutil.go:53] new ssh client: &{IP:172.25.151.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-499500\id_rsa Username:docker}
I0318 11:08:35.535907    7692 ssh_runner.go:235] Completed: systemctl --version: (5.2993141s)
I0318 11:08:35.550340    7692 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.97s)
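
Note: this test and the three that follow exercise the same listing in each of the supported output renderings; only the format flag differs:

	out/minikube-windows-amd64.exe -p functional-499500 image ls --format short
	out/minikube-windows-amd64.exe -p functional-499500 image ls --format table
	out/minikube-windows-amd64.exe -p functional-499500 image ls --format json
	out/minikube-windows-amd64.exe -p functional-499500 image ls --format yaml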

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (8.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image ls --format table --alsologtostderr: (8.1613244s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-499500 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| gcr.io/google-containers/addon-resizer      | functional-499500 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-499500 | 1a28d95250718 | 30B    |
| docker.io/library/nginx                     | latest            | 92b11f67642b6 | 187MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/library/nginx                     | alpine            | e289a478ace02 | 42.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-499500 image ls --format table --alsologtostderr:
W0318 11:08:46.061398    8864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0318 11:08:46.166110    8864 out.go:291] Setting OutFile to fd 940 ...
I0318 11:08:46.166110    8864 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:08:46.166110    8864 out.go:304] Setting ErrFile to fd 880...
I0318 11:08:46.166110    8864 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:08:46.184110    8864 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:08:46.185109    8864 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:08:46.186110    8864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
I0318 11:08:48.555474    8864 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0318 11:08:48.555474    8864 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:48.573282    8864 ssh_runner.go:195] Run: systemctl --version
I0318 11:08:48.573973    8864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
I0318 11:08:50.941396    8864 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0318 11:08:50.941396    8864 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:50.941643    8864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
I0318 11:08:53.875156    8864 main.go:141] libmachine: [stdout =====>] : 172.25.151.65

                                                
                                                
I0318 11:08:53.875225    8864 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:53.875225    8864 sshutil.go:53] new ssh client: &{IP:172.25.151.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-499500\id_rsa Username:docker}
I0318 11:08:53.985111    8864 ssh_runner.go:235] Completed: systemctl --version: (5.4117956s)
I0318 11:08:53.997133    8864 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (8.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (8.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image ls --format json --alsologtostderr: (8.0496703s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-499500 image ls --format json --alsologtostderr:
[{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-499500"],"size":"32900000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"5107333e08a87b836d4
8ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"1a28d95250718a03a671f87056173283824c58af053050690ddd9acd473f4fc0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-499500"],"size":"30"},{"id":"e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608","repoDigests":[],"
repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-499500 image ls --format json --alsologtostderr:
W0318 11:08:38.006905    8784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0318 11:08:38.099742    8784 out.go:291] Setting OutFile to fd 940 ...
I0318 11:08:38.100742    8784 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:08:38.100742    8784 out.go:304] Setting ErrFile to fd 880...
I0318 11:08:38.100742    8784 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:08:38.128554    8784 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:08:38.129553    8784 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:08:38.130557    8784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
I0318 11:08:40.529829    8784 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0318 11:08:40.530876    8784 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:40.547358    8784 ssh_runner.go:195] Run: systemctl --version
I0318 11:08:40.547892    8784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
I0318 11:08:42.923863    8784 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0318 11:08:42.924070    8784 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:42.924141    8784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
I0318 11:08:45.737410    8784 main.go:141] libmachine: [stdout =====>] : 172.25.151.65

                                                
                                                
I0318 11:08:45.737410    8784 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:45.737943    8784 sshutil.go:53] new ssh client: &{IP:172.25.151.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-499500\id_rsa Username:docker}
I0318 11:08:45.835883    8784 ssh_runner.go:235] Completed: systemctl --version: (5.2884921s)
I0318 11:08:45.847490    8784 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (8.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (7.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image ls --format yaml --alsologtostderr: (7.9650834s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-499500 image ls --format yaml --alsologtostderr:
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-499500
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 1a28d95250718a03a671f87056173283824c58af053050690ddd9acd473f4fc0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-499500
size: "30"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-499500 image ls --format yaml --alsologtostderr:
W0318 11:08:30.033855    3844 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0318 11:08:30.129853    3844 out.go:291] Setting OutFile to fd 576 ...
I0318 11:08:30.142857    3844 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:08:30.142857    3844 out.go:304] Setting ErrFile to fd 940...
I0318 11:08:30.142857    3844 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:08:30.162762    3844 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:08:30.163292    3844 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:08:30.164041    3844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
I0318 11:08:32.539531    3844 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0318 11:08:32.539889    3844 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:32.553611    3844 ssh_runner.go:195] Run: systemctl --version
I0318 11:08:32.553611    3844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
I0318 11:08:34.908038    3844 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0318 11:08:34.908038    3844 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:34.908038    3844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
I0318 11:08:37.675393    3844 main.go:141] libmachine: [stdout =====>] : 172.25.151.65

                                                
                                                
I0318 11:08:37.675469    3844 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:37.675469    3844 sshutil.go:53] new ssh client: &{IP:172.25.151.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-499500\id_rsa Username:docker}
I0318 11:08:37.785701    3844 ssh_runner.go:235] Completed: systemctl --version: (5.2320579s)
I0318 11:08:37.796256    3844 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (28.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-499500 ssh pgrep buildkitd: exit status 1 (10.3761657s)

                                                
                                                
** stderr ** 
	W0318 11:08:35.751828    3892 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image build -t localhost/my-image:functional-499500 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image build -t localhost/my-image:functional-499500 testdata\build --alsologtostderr: (10.6110487s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-499500 image build -t localhost/my-image:functional-499500 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 8e2019866811
---> Removed intermediate container 8e2019866811
---> ed11d6680d1b
Step 3/3 : ADD content.txt /
---> ce94d6a41d92
Successfully built ce94d6a41d92
Successfully tagged localhost/my-image:functional-499500
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-499500 image build -t localhost/my-image:functional-499500 testdata\build --alsologtostderr:
W0318 11:08:46.124538   12752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0318 11:08:46.211113   12752 out.go:291] Setting OutFile to fd 1144 ...
I0318 11:08:46.229909   12752 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:08:46.229976   12752 out.go:304] Setting ErrFile to fd 1148...
I0318 11:08:46.230059   12752 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:08:46.247252   12752 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:08:46.263314   12752 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:08:46.265187   12752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
I0318 11:08:48.685343   12752 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0318 11:08:48.685429   12752 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:48.698409   12752 ssh_runner.go:195] Run: systemctl --version
I0318 11:08:48.699069   12752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-499500 ).state
I0318 11:08:51.093747   12752 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0318 11:08:51.093830   12752 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:51.093907   12752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-499500 ).networkadapters[0]).ipaddresses[0]
I0318 11:08:53.991379   12752 main.go:141] libmachine: [stdout =====>] : 172.25.151.65

                                                
                                                
I0318 11:08:53.991379   12752 main.go:141] libmachine: [stderr =====>] : 
I0318 11:08:53.992123   12752 sshutil.go:53] new ssh client: &{IP:172.25.151.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-499500\id_rsa Username:docker}
I0318 11:08:54.109553   12752 ssh_runner.go:235] Completed: systemctl --version: (5.4105735s)
I0318 11:08:54.109723   12752 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.1691164851.tar
I0318 11:08:54.127118   12752 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0318 11:08:54.167122   12752 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1691164851.tar
I0318 11:08:54.175214   12752 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1691164851.tar: stat -c "%s %y" /var/lib/minikube/build/build.1691164851.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1691164851.tar': No such file or directory
I0318 11:08:54.175364   12752 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.1691164851.tar --> /var/lib/minikube/build/build.1691164851.tar (3072 bytes)
I0318 11:08:54.241167   12752 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1691164851
I0318 11:08:54.274961   12752 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1691164851 -xf /var/lib/minikube/build/build.1691164851.tar
I0318 11:08:54.295234   12752 docker.go:360] Building image: /var/lib/minikube/build/build.1691164851
I0318 11:08:54.305623   12752 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-499500 /var/lib/minikube/build/build.1691164851
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0318 11:08:56.495640   12752 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-499500 /var/lib/minikube/build/build.1691164851: (2.1899414s)
I0318 11:08:56.514365   12752 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1691164851
I0318 11:08:56.554008   12752 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1691164851.tar
I0318 11:08:56.574059   12752 build_images.go:217] Built localhost/my-image:functional-499500 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.1691164851.tar
I0318 11:08:56.574208   12752 build_images.go:133] succeeded building to: functional-499500
I0318 11:08:56.574333   12752 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image ls: (7.6479327s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (28.64s)
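
From the three build steps in the stdout above, the Dockerfile under testdata\build evidently amounts to the following (reconstructed; content.txt sits alongside it in the build context):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /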

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.0419214s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-499500
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (15.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 service list: (15.2916787s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (15.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (27.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image load --daemon gcr.io/google-containers/addon-resizer:functional-499500 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image load --daemon gcr.io/google-containers/addon-resizer:functional-499500 --alsologtostderr: (18.1622336s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image ls: (9.4293838s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (27.59s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (11.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (11.6796411s)
functional_test.go:1362: Took "11.6805972s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "271.3541ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (11.95s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (14.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 service list -o json: (14.6382361s)
functional_test.go:1490: Took "14.6382361s" to run "out/minikube-windows-amd64.exe -p functional-499500 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.64s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (50.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-499500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-499500"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-499500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-499500": (33.6382955s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-499500 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-499500 docker-env | Invoke-Expression ; docker images": (16.9034528s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (50.56s)
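
Note: docker-env emits the environment variables that point a local Docker CLI at the daemon inside the VM; piping the output through Invoke-Expression applies them to the current PowerShell session, exactly as the test does:

	out/minikube-windows-amd64.exe -p functional-499500 docker-env | Invoke-Expression
	docker images   # now lists images from the daemon inside functional-499500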

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (23.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image load --daemon gcr.io/google-containers/addon-resizer:functional-499500 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image load --daemon gcr.io/google-containers/addon-resizer:functional-499500 --alsologtostderr: (14.4452801s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image ls: (9.2589004s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (23.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (30.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.942293s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-499500
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image load --daemon gcr.io/google-containers/addon-resizer:functional-499500 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image load --daemon gcr.io/google-containers/addon-resizer:functional-499500 --alsologtostderr: (16.9394599s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image ls: (8.8595156s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (30.01s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (2.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 update-context --alsologtostderr -v=2: (2.6672311s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.67s)
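
Note: update-context rewrites the profile's kubeconfig entry with the VM's current IP and port, which matters on Hyper-V where the guest address can change across restarts. A quick manual check:

	out/minikube-windows-amd64.exe -p functional-499500 update-context
	kubectl --context functional-499500 get nodes   # verifies the refreshed context resolves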

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 update-context --alsologtostderr -v=2: (2.6516271s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.65s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 update-context --alsologtostderr -v=2: (2.6740213s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image save gcr.io/google-containers/addon-resizer:functional-499500 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image save gcr.io/google-containers/addon-resizer:functional-499500 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.2569069s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (16.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image rm gcr.io/google-containers/addon-resizer:functional-499500 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image rm gcr.io/google-containers/addon-resizer:functional-499500 --alsologtostderr: (8.191374s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image ls: (8.4031535s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (16.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.308693s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image ls: (7.64464s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.95s)
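
Note: together with ImageSaveToFile and ImageRemove above, this completes a save, remove, reload round trip through a tarball; condensed, with the tar path shortened here for readability:

	out/minikube-windows-amd64.exe -p functional-499500 image save gcr.io/google-containers/addon-resizer:functional-499500 addon-resizer-save.tar
	out/minikube-windows-amd64.exe -p functional-499500 image rm gcr.io/google-containers/addon-resizer:functional-499500
	out/minikube-windows-amd64.exe -p functional-499500 image load addon-resizer-save.tar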

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-499500
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-499500 image save --daemon gcr.io/google-containers/addon-resizer:functional-499500 --alsologtostderr
E0318 11:07:21.948176    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-499500 image save --daemon gcr.io/google-containers/addon-resizer:functional-499500 --alsologtostderr: (10.5498143s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-499500
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.95s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-499500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-499500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-499500 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 5728: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 3356: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-499500 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-499500 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-499500 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [409e9111-7c2e-40bd-a256-61bc1757bfaf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [409e9111-7c2e-40bd-a256-61bc1757bfaf] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.0220439s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.60s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-499500 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 7968: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/delete_addon-resizer_images (0.51s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-499500
--- PASS: TestFunctional/delete_addon-resizer_images (0.51s)

TestFunctional/delete_my-image_image (0.18s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-499500
--- PASS: TestFunctional/delete_my-image_image (0.18s)

TestFunctional/delete_minikube_cached_images (0.19s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-499500
--- PASS: TestFunctional/delete_minikube_cached_images (0.19s)

TestMultiControlPlane/serial/StartCluster (749.54s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-606900 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0318 11:14:49.600954    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:14:49.615400    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:14:49.630777    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:14:49.662069    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:14:49.709286    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:14:49.803425    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:14:49.978463    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:14:50.311953    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:14:50.962392    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:14:52.249407    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:14:54.823877    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:14:59.948414    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:15:10.191797    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:15:30.681204    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:16:11.643735    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:17:21.960890    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 11:17:33.570122    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:19:49.608553    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:20:17.418624    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 11:20:25.142142    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 11:22:21.952516    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-606900 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m51.7945268s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 status -v=7 --alsologtostderr
E0318 11:24:49.605086    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 status -v=7 --alsologtostderr: (37.7468387s)
--- PASS: TestMultiControlPlane/serial/StartCluster (749.54s)
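
Note on the repeated cert_rotation errors above: their timestamps double in spacing, from roughly 15 ms apart at 11:14:49 to over a minute apart by 11:16, the shape of an exponential-backoff retry loop hitting a client.crt that no longer exists on disk (presumably because the functional-499500 profile was cleaned up earlier in the run). A minimal Go sketch of that retry pattern; the attempt count and delay ceiling here are assumptions for illustration, not client-go's actual values:

package main

import (
	"fmt"
	"os"
	"time"
)

// retryWithBackoff keeps calling attempt, doubling the wait after each
// failure -- the same doubling visible in the log timestamps above.
func retryWithBackoff(attempt func() error, tries int) error {
	delay := 15 * time.Millisecond  // first gap seen in the log
	const maxDelay = 2 * time.Minute // assumed ceiling for this sketch
	var err error
	for i := 0; i < tries; i++ {
		if err = attempt(); err == nil {
			return nil
		}
		time.Sleep(delay)
		if delay < maxDelay {
			delay *= 2
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", tries, err)
}

func main() {
	// The failing call from the log: client.crt is gone, so every
	// attempt fails and the gap between retries keeps doubling.
	err := retryWithBackoff(func() error {
		_, err := os.Stat(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt`)
		return err
	}, 8)
	fmt.Println(err)
}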

TestMultiControlPlane/serial/DeployApp (43.67s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-606900 -- rollout status deployment/busybox: (4.2648188s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.0.4 10.244.1.3'\n\n-- /stdout --\n** stderr ** \n\tW0318 11:25:10.286005    8628 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.0.4 10.244.1.3'\n\n-- /stdout --\n** stderr ** \n\tW0318 11:25:11.368966    3592 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.0.4 10.244.1.3'\n\n-- /stdout --\n** stderr ** \n\tW0318 11:25:13.969041    8028 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.0.4 10.244.1.3'\n\n-- /stdout --\n** stderr ** \n\tW0318 11:25:16.772402    9144 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.0.4 10.244.1.3'\n\n-- /stdout --\n** stderr ** \n\tW0318 11:25:21.432536   12384 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.0.4 10.244.1.3'\n\n-- /stdout --\n** stderr ** \n\tW0318 11:25:26.376542   13360 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.0.4 10.244.1.3'\n\n-- /stdout --\n** stderr ** \n\tW0318 11:25:31.329641    7592 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-bsmjb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-bsmjb -- nslookup kubernetes.io: (1.7810303s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-cqzzh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-cqzzh -- nslookup kubernetes.io: (1.6655194s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-qdlmz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-bsmjb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-cqzzh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-qdlmz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-bsmjb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-cqzzh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-606900 -- exec busybox-5b5d89c9d6-qdlmz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (43.67s)

TestMultiControlPlane/serial/AddWorkerNode (262.94s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-606900 -v=7 --alsologtostderr
E0318 11:27:21.951717    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 11:29:49.599029    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-606900 -v=7 --alsologtostderr: (3m32.4385714s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-606900 status -v=7 --alsologtostderr
E0318 11:31:12.796596    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-606900 status -v=7 --alsologtostderr: (50.4979021s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (262.94s)

TestMultiControlPlane/serial/NodeLabels (0.2s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-606900 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.20s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (29.76s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (29.7586868s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (29.76s)

TestImageBuild/serial/Setup (207.21s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-427800 --driver=hyperv
E0318 11:47:21.964105    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 11:47:52.816405    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-427800 --driver=hyperv: (3m27.2065561s)
--- PASS: TestImageBuild/serial/Setup (207.21s)

TestImageBuild/serial/NormalBuild (10.24s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-427800
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-427800: (10.2383914s)
--- PASS: TestImageBuild/serial/NormalBuild (10.24s)

TestImageBuild/serial/BuildWithBuildArg (9.51s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-427800
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-427800: (9.5127802s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.51s)

TestImageBuild/serial/BuildWithDockerIgnore (8.19s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-427800
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-427800: (8.1870967s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.19s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.99s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-427800
E0318 11:49:49.610065    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-427800: (7.9863081s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.99s)

TestJSONOutput/start/Command (252.34s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-790200 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0318 11:52:21.961424    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 11:53:45.169901    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-790200 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (4m12.3418838s)
--- PASS: TestJSONOutput/start/Command (252.34s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (8.32s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-790200 --output=json --user=testUser
E0318 11:54:49.606853    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-790200 --output=json --user=testUser: (8.3139163s)
--- PASS: TestJSONOutput/pause/Command (8.32s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (8.2s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-790200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-790200 --output=json --user=testUser: (8.2022951s)
--- PASS: TestJSONOutput/unpause/Command (8.20s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (40.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-790200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-790200 --output=json --user=testUser: (40.8731902s)
--- PASS: TestJSONOutput/stop/Command (40.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.8s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-209000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-209000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (308.5471ms)

-- stdout --
	{"specversion":"1.0","id":"4e0fda08-7187-48ce-955f-9a06c6fb2e7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-209000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"10d0e5a8-3d39-4383-b5fc-488b85af0a69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"d63d1a88-cc4c-4141-a109-3a96d6dbcc96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"469e9244-d8e2-40bb-a1f0-73ac372a1211","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"76933fcb-40c5-4211-b58e-45b545c998ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18431"}}
	{"specversion":"1.0","id":"851aa4ab-6db0-46fd-893d-7263451d1e43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4672736a-8d14-4b5c-8906-a1841935ceb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0318 11:56:01.143353   13292 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-209000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-209000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-209000: (1.4864758s)
--- PASS: TestErrorJSONOutput (1.80s)
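
Each line in the stdout block above is a self-contained CloudEvents-style JSON object, which is what makes --output=json machine-readable. A minimal Go sketch of a decoder for such lines; the struct covers only the fields visible in this report, and treating data as a map of strings is an assumption that holds for the events shown:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the fields visible in the JSON lines above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore non-JSON lines (e.g. the stderr warning)
		}
		// An io.k8s.sigs.minikube.error event carries "exitcode" in its
		// data map -- "56" (DRV_UNSUPPORTED_OS) in the run above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}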

TestMainNoArgs (0.49s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.49s)

TestMinikubeProfile (590.01s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-017200 --driver=hyperv
E0318 11:57:21.969069    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-017200 --driver=hyperv: (3m26.9409255s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-017200 --driver=hyperv
E0318 11:59:49.623545    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 12:02:21.969782    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-017200 --driver=hyperv: (3m34.6914287s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-017200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (46.0207262s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-017200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (29.3482677s)
helpers_test.go:175: Cleaning up "second-017200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-017200
E0318 12:04:32.827389    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 12:04:49.624373    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-017200: (43.5763623s)
helpers_test.go:175: Cleaning up "first-017200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-017200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-017200: (48.3884251s)
--- PASS: TestMinikubeProfile (590.01s)

TestMountStart/serial/StartWithMountFirst (161.39s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-751800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0318 12:07:21.978759    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-751800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m40.3761917s)
--- PASS: TestMountStart/serial/StartWithMountFirst (161.39s)

TestMountStart/serial/VerifyMountFirst (9.94s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-751800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-751800 ssh -- ls /minikube-host: (9.938919s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.94s)

TestMountStart/serial/StartWithMountSecond (162.52s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-751800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0318 12:09:49.613088    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 12:10:25.178435    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-751800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m41.5160451s)
--- PASS: TestMountStart/serial/StartWithMountSecond (162.52s)

TestMountStart/serial/VerifyMountSecond (9.82s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-751800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-751800 ssh -- ls /minikube-host: (9.8146799s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.82s)

TestMountStart/serial/DeleteFirst (32.76s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-751800 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-751800 --alsologtostderr -v=5: (32.7548009s)
--- PASS: TestMountStart/serial/DeleteFirst (32.76s)

TestMountStart/serial/VerifyMountPostDelete (9.79s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-751800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-751800 ssh -- ls /minikube-host: (9.7876159s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.79s)

TestMountStart/serial/Stop (31.33s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-751800
E0318 12:12:21.972899    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-751800: (31.3280818s)
--- PASS: TestMountStart/serial/Stop (31.33s)

TestMountStart/serial/RestartStopped (122.46s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-751800
E0318 12:14:49.626137    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-751800: (2m1.4550856s)
--- PASS: TestMountStart/serial/RestartStopped (122.46s)

TestMountStart/serial/VerifyMountPostStop (9.93s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-751800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-751800 ssh -- ls /minikube-host: (9.9272539s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.93s)

TestMultiNode/serial/FreshStart2Nodes (447.99s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-642600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0318 12:17:21.970703    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 12:19:49.631243    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 12:21:12.843114    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 12:22:21.984062    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-642600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (7m3.097579s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 status --alsologtostderr: (24.8885224s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (447.99s)

TestMultiNode/serial/DeployApp2Nodes (9.99s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- rollout status deployment/busybox: (3.6160859s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- exec busybox-5b5d89c9d6-48qkw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- exec busybox-5b5d89c9d6-48qkw -- nslookup kubernetes.io: (1.9130659s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- exec busybox-5b5d89c9d6-hmhdf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- exec busybox-5b5d89c9d6-48qkw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- exec busybox-5b5d89c9d6-hmhdf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- exec busybox-5b5d89c9d6-48qkw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-642600 -- exec busybox-5b5d89c9d6-hmhdf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.99s)

TestMultiNode/serial/AddNode (240.44s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-642600 -v 3 --alsologtostderr
E0318 12:24:49.626989    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 12:27:05.196460    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 12:27:21.972201    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-642600 -v 3 --alsologtostderr: (3m23.1186747s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 status --alsologtostderr: (37.3190396s)
--- PASS: TestMultiNode/serial/AddNode (240.44s)

TestMultiNode/serial/MultiNodeLabels (0.2s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-642600 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.20s)

TestMultiNode/serial/ProfileList (33.48s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (33.4765996s)
--- PASS: TestMultiNode/serial/ProfileList (33.48s)

TestMultiNode/serial/CopyFile (377.8s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 status --output json --alsologtostderr: (37.4214901s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 cp testdata\cp-test.txt multinode-642600:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 cp testdata\cp-test.txt multinode-642600:/home/docker/cp-test.txt: (9.8424062s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600 "sudo cat /home/docker/cp-test.txt": (9.8123357s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4179459229\001\cp-test_multinode-642600.txt
E0318 12:29:49.621298    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4179459229\001\cp-test_multinode-642600.txt: (9.8901173s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600 "sudo cat /home/docker/cp-test.txt": (9.8953399s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600:/home/docker/cp-test.txt multinode-642600-m02:/home/docker/cp-test_multinode-642600_multinode-642600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600:/home/docker/cp-test.txt multinode-642600-m02:/home/docker/cp-test_multinode-642600_multinode-642600-m02.txt: (17.1959877s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600 "sudo cat /home/docker/cp-test.txt": (9.8989754s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m02 "sudo cat /home/docker/cp-test_multinode-642600_multinode-642600-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m02 "sudo cat /home/docker/cp-test_multinode-642600_multinode-642600-m02.txt": (9.7949449s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600:/home/docker/cp-test.txt multinode-642600-m03:/home/docker/cp-test_multinode-642600_multinode-642600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600:/home/docker/cp-test.txt multinode-642600-m03:/home/docker/cp-test_multinode-642600_multinode-642600-m03.txt: (17.1336774s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600 "sudo cat /home/docker/cp-test.txt": (9.7539223s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m03 "sudo cat /home/docker/cp-test_multinode-642600_multinode-642600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m03 "sudo cat /home/docker/cp-test_multinode-642600_multinode-642600-m03.txt": (9.8391434s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 cp testdata\cp-test.txt multinode-642600-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 cp testdata\cp-test.txt multinode-642600-m02:/home/docker/cp-test.txt: (9.9114758s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m02 "sudo cat /home/docker/cp-test.txt": (9.8044933s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4179459229\001\cp-test_multinode-642600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4179459229\001\cp-test_multinode-642600-m02.txt: (9.875778s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m02 "sudo cat /home/docker/cp-test.txt": (9.7969751s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600-m02:/home/docker/cp-test.txt multinode-642600:/home/docker/cp-test_multinode-642600-m02_multinode-642600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600-m02:/home/docker/cp-test.txt multinode-642600:/home/docker/cp-test_multinode-642600-m02_multinode-642600.txt: (17.1964066s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m02 "sudo cat /home/docker/cp-test.txt": (9.9577222s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600 "sudo cat /home/docker/cp-test_multinode-642600-m02_multinode-642600.txt"
E0318 12:32:21.979817    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600 "sudo cat /home/docker/cp-test_multinode-642600-m02_multinode-642600.txt": (9.8856434s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600-m02:/home/docker/cp-test.txt multinode-642600-m03:/home/docker/cp-test_multinode-642600-m02_multinode-642600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600-m02:/home/docker/cp-test.txt multinode-642600-m03:/home/docker/cp-test_multinode-642600-m02_multinode-642600-m03.txt: (16.9689658s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m02 "sudo cat /home/docker/cp-test.txt": (9.8823545s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m03 "sudo cat /home/docker/cp-test_multinode-642600-m02_multinode-642600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m03 "sudo cat /home/docker/cp-test_multinode-642600-m02_multinode-642600-m03.txt": (9.8482557s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 cp testdata\cp-test.txt multinode-642600-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 cp testdata\cp-test.txt multinode-642600-m03:/home/docker/cp-test.txt: (9.9840344s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m03 "sudo cat /home/docker/cp-test.txt": (9.9429397s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4179459229\001\cp-test_multinode-642600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4179459229\001\cp-test_multinode-642600-m03.txt: (10.1659936s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m03 "sudo cat /home/docker/cp-test.txt": (9.9274854s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600-m03:/home/docker/cp-test.txt multinode-642600:/home/docker/cp-test_multinode-642600-m03_multinode-642600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600-m03:/home/docker/cp-test.txt multinode-642600:/home/docker/cp-test_multinode-642600-m03_multinode-642600.txt: (17.3136144s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m03 "sudo cat /home/docker/cp-test.txt": (9.9549953s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600 "sudo cat /home/docker/cp-test_multinode-642600-m03_multinode-642600.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600 "sudo cat /home/docker/cp-test_multinode-642600-m03_multinode-642600.txt": (9.9351142s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600-m03:/home/docker/cp-test.txt multinode-642600-m02:/home/docker/cp-test_multinode-642600-m03_multinode-642600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 cp multinode-642600-m03:/home/docker/cp-test.txt multinode-642600-m02:/home/docker/cp-test_multinode-642600-m03_multinode-642600-m02.txt: (17.2330508s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m03 "sudo cat /home/docker/cp-test.txt"
E0318 12:34:49.624318    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m03 "sudo cat /home/docker/cp-test.txt": (9.9425886s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m02 "sudo cat /home/docker/cp-test_multinode-642600-m03_multinode-642600-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 ssh -n multinode-642600-m02 "sudo cat /home/docker/cp-test_multinode-642600-m03_multinode-642600-m02.txt": (9.7686074s)
--- PASS: TestMultiNode/serial/CopyFile (377.80s)
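
Note: the long sequence above exercises every direction of minikube cp; a condensed sketch of the matrix, with profile and node names as placeholders and minikube standing for out/minikube-windows-amd64.exe. In this run, node-to-node copies consistently took ~17s versus ~10s for host-to-node transfers:

    # Host -> node
    minikube -p <profile> cp testdata\cp-test.txt <node>:/home/docker/cp-test.txt
    # Node -> host
    minikube -p <profile> cp <node>:/home/docker/cp-test.txt C:\tmp\cp-test.txt
    # Node -> node
    minikube -p <profile> cp <src>:/home/docker/cp-test.txt <dst>:/home/docker/cp-test_copy.txt
    # Verify each transfer by reading the file back over SSH
    minikube -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"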

TestMultiNode/serial/StopNode (79.85s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 node stop m03: (25.903265s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-642600 status: exit status 7 (26.9056113s)

-- stdout --
	multinode-642600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-642600-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-642600-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0318 12:35:27.871298    7896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-642600 status --alsologtostderr: exit status 7 (27.0368279s)

-- stdout --
	multinode-642600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-642600-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-642600-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0318 12:35:54.773540    4324 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0318 12:35:54.871580    4324 out.go:291] Setting OutFile to fd 1040 ...
	I0318 12:35:54.872430    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:35:54.872430    4324 out.go:304] Setting ErrFile to fd 1340...
	I0318 12:35:54.872430    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:35:54.890328    4324 out.go:298] Setting JSON to false
	I0318 12:35:54.890328    4324 mustload.go:65] Loading cluster: multinode-642600
	I0318 12:35:54.890849    4324 notify.go:220] Checking for updates...
	I0318 12:35:54.891118    4324 config.go:182] Loaded profile config "multinode-642600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:35:54.891118    4324 status.go:255] checking status of multinode-642600 ...
	I0318 12:35:54.892359    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:35:57.163908    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:35:57.163908    4324 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:35:57.163908    4324 status.go:330] multinode-642600 host status = "Running" (err=<nil>)
	I0318 12:35:57.164522    4324 host.go:66] Checking if "multinode-642600" exists ...
	I0318 12:35:57.165381    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:35:59.381739    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:35:59.381874    4324 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:35:59.381953    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:36:02.054877    4324 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:36:02.054877    4324 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:36:02.054877    4324 host.go:66] Checking if "multinode-642600" exists ...
	I0318 12:36:02.069423    4324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:36:02.069423    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600 ).state
	I0318 12:36:04.306067    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:36:04.306067    4324 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:36:04.306168    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600 ).networkadapters[0]).ipaddresses[0]
	I0318 12:36:06.962259    4324 main.go:141] libmachine: [stdout =====>] : 172.25.151.112
	
	I0318 12:36:06.962259    4324 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:36:06.962971    4324 sshutil.go:53] new ssh client: &{IP:172.25.151.112 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600\id_rsa Username:docker}
	I0318 12:36:07.070343    4324 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0008893s)
	I0318 12:36:07.084944    4324 ssh_runner.go:195] Run: systemctl --version
	I0318 12:36:07.107122    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:36:07.135055    4324 kubeconfig.go:125] found "multinode-642600" server: "https://172.25.151.112:8443"
	I0318 12:36:07.135113    4324 api_server.go:166] Checking apiserver status ...
	I0318 12:36:07.147201    4324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:36:07.184801    4324 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2144/cgroup
	W0318 12:36:07.203351    4324 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2144/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:36:07.215944    4324 ssh_runner.go:195] Run: ls
	I0318 12:36:07.223816    4324 api_server.go:253] Checking apiserver healthz at https://172.25.151.112:8443/healthz ...
	I0318 12:36:07.234256    4324 api_server.go:279] https://172.25.151.112:8443/healthz returned 200:
	ok
	I0318 12:36:07.234256    4324 status.go:422] multinode-642600 apiserver status = Running (err=<nil>)
	I0318 12:36:07.234256    4324 status.go:257] multinode-642600 status: &{Name:multinode-642600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:36:07.234256    4324 status.go:255] checking status of multinode-642600-m02 ...
	I0318 12:36:07.234256    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:36:09.429789    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:36:09.429789    4324 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:36:09.429789    4324 status.go:330] multinode-642600-m02 host status = "Running" (err=<nil>)
	I0318 12:36:09.429789    4324 host.go:66] Checking if "multinode-642600-m02" exists ...
	I0318 12:36:09.431316    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:36:11.663894    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:36:11.664978    4324 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:36:11.664978    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:36:14.348516    4324 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:36:14.348516    4324 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:36:14.348516    4324 host.go:66] Checking if "multinode-642600-m02" exists ...
	I0318 12:36:14.360772    4324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:36:14.360772    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m02 ).state
	I0318 12:36:16.596991    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:36:16.596991    4324 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:36:16.596991    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-642600-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:36:19.294842    4324 main.go:141] libmachine: [stdout =====>] : 172.25.159.102
	
	I0318 12:36:19.294842    4324 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:36:19.294842    4324 sshutil.go:53] new ssh client: &{IP:172.25.159.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-642600-m02\id_rsa Username:docker}
	I0318 12:36:19.396345    4324 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0355424s)
	I0318 12:36:19.409396    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:36:19.438015    4324 status.go:257] multinode-642600-m02 status: &{Name:multinode-642600-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:36:19.438140    4324 status.go:255] checking status of multinode-642600-m03 ...
	I0318 12:36:19.438938    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-642600-m03 ).state
	I0318 12:36:21.658916    4324 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 12:36:21.659910    4324 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:36:21.659965    4324 status.go:330] multinode-642600-m03 host status = "Stopped" (err=<nil>)
	I0318 12:36:21.659965    4324 status.go:343] host is not running, skipping remaining checks
	I0318 12:36:21.659965    4324 status.go:257] multinode-642600-m03 status: &{Name:multinode-642600-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (79.85s)
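
Note: exit status 7 above is the expected signal, not a failure; status appears to return it whenever a node's host is Stopped. A sketch of the check (profile name is a placeholder, minikube stands for the binary under test):

    # Stop a single worker; the control plane keeps running
    minikube -p <profile> node stop m03
    # Exit code 7 indicates at least one stopped node (PowerShell)
    minikube -p <profile> status; $LASTEXITCODE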

TestMultiNode/serial/StartAfterStop (190.29s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 node start m03 -v=7 --alsologtostderr
E0318 12:37:21.978840    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 12:37:52.859464    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 node start m03 -v=7 --alsologtostderr: (2m33.1476475s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-642600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-642600 status -v=7 --alsologtostderr: (36.9608421s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (190.29s)
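
Note: restart-after-stop is the inverse of the previous step; a sketch, with placeholders as before:

    # Bring the stopped worker back and confirm all nodes rejoin
    minikube -p <profile> node start m03 -v=7 --alsologtostderr
    minikube -p <profile> status -v=7 --alsologtostderr
    kubectl get nodes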

TestPreload (545.49s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-009800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0318 12:49:49.638323    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 12:52:21.988192    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-009800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m39.3158736s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-009800 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-009800 image pull gcr.io/k8s-minikube/busybox: (8.8279266s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-009800
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-009800: (40.7523209s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-009800 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0318 12:54:32.873241    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 12:54:49.643192    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-009800 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m45.1276154s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-009800 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-009800 image list: (7.6994957s)
helpers_test.go:175: Cleaning up "test-preload-009800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-009800
E0318 12:57:21.991593    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-009800: (43.7644853s)
--- PASS: TestPreload (545.49s)
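
Note: the preload test's shape is easy to misread from the log; a sketch of the cycle, profile name as a placeholder:

    # 1. Create a cluster with preload tarballs disabled, on an older Kubernetes
    minikube start -p <profile> --memory=2200 --preload=false --driver=hyperv --kubernetes-version=v1.24.4
    # 2. Pull an extra image into the cluster's runtime
    minikube -p <profile> image pull gcr.io/k8s-minikube/busybox
    # 3. Stop, then restart without pinning a version
    minikube stop -p <profile>
    minikube start -p <profile> --memory=2200 --wait=true --driver=hyperv
    # 4. The image pulled in step 2 must still appear here
    minikube -p <profile> image list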

TestScheduledStopWindows (347.35s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-640000 --memory=2048 --driver=hyperv
E0318 12:59:49.632632    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
E0318 13:00:25.234051    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-640000 --memory=2048 --driver=hyperv: (3m29.5907248s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-640000 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-640000 --schedule 5m: (11.4038768s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-640000 -n scheduled-stop-640000
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-640000 -n scheduled-stop-640000: exit status 1 (10.0341052s)

** stderr ** 
	W0318 13:01:24.587082   14268 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-640000 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-640000 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.9854603s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-640000 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-640000 --schedule 5s: (11.2764153s)
E0318 13:02:21.993524    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-640000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-640000: exit status 7 (2.5482399s)

-- stdout --
	scheduled-stop-640000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0318 13:02:55.891199    7236 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-640000 -n scheduled-stop-640000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-640000 -n scheduled-stop-640000: exit status 7 (2.5435627s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0318 13:02:58.444662    9284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-640000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-640000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-640000: (29.958847s)
--- PASS: TestScheduledStopWindows (347.35s)
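
Note: scheduled stop is driven by the minikube-scheduled-stop systemd unit inside the guest, which is what the test queries above; a sketch of the cycle, placeholders as before:

    # Schedule a stop 5 minutes out; the CLI returns immediately
    minikube stop -p <profile> --schedule 5m
    # Inspect the pending stop via the unit the test queries
    minikube ssh -p <profile> -- sudo systemctl show minikube-scheduled-stop --no-page
    # A short schedule actually stops the VM; status then exits 7 (Stopped)
    minikube stop -p <profile> --schedule 5s
    minikube status -p <profile>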

TestRunningBinaryUpgrade (1131.99s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.1750708928.exe start -p running-upgrade-148100 --memory=2200 --vm-driver=hyperv
E0318 13:04:49.643059    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.1750708928.exe start -p running-upgrade-148100 --memory=2200 --vm-driver=hyperv: (8m29.648221s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-148100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0318 13:12:22.000511    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-148100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (9m5.8396971s)
helpers_test.go:175: Cleaning up "running-upgrade-148100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-148100
E0318 13:22:22.005586    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-148100: (1m15.7482967s)
--- PASS: TestRunningBinaryUpgrade (1131.99s)
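
Note: the upgrade-in-place pattern above is just two starts against the same profile with different binaries; a sketch (the old binary's path is illustrative, v1.26.0 was the release used in this run):

    # 1. Bring a cluster up with a previous release
    C:\path\to\minikube-v1.26.0.exe start -p <profile> --memory=2200 --vm-driver=hyperv
    # 2. Without stopping it, restart the profile with the binary under test
    out/minikube-windows-amd64.exe start -p <profile> --memory=2200 --alsologtostderr -v=1 --driver=hyperv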

TestKubernetesUpgrade (1413.49s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (8m27.3036431s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-340000
E0318 13:17:05.247542    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
E0318 13:17:21.990686    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-748800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-340000: (37.0449765s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-340000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-340000 status --format={{.Host}}: exit status 7 (2.6273136s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0318 13:17:36.408501   10144 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (7m2.9594315s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-340000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (391.4067ms)

-- stdout --
	* [kubernetes-upgrade-340000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18431
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0318 13:24:42.236531    9684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-340000
	    minikube start -p kubernetes-upgrade-340000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3400002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-340000 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
E0318 13:24:49.648927    9120 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-499500\client.crt: The system cannot find the path specified.
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (6m33.9660108s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-340000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-340000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-340000: (48.9739853s)
--- PASS: TestKubernetesUpgrade (1413.49s)
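
Note: the version walk above encodes minikube's upgrade policy: upgrades are applied in place, while downgrades are refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) and a delete/recreate suggestion. A sketch, placeholders as before:

    minikube start -p <profile> --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
    minikube stop -p <profile>
    # Upgrade on restart: allowed
    minikube start -p <profile> --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=hyperv
    # Downgrade attempt: fails fast with exit status 106
    minikube start -p <profile> --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv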

TestNoKubernetes/serial/StartNoK8sWithVersion (0.4s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-148100 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-148100 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (395.4506ms)

-- stdout --
	* [NoKubernetes-148100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18431
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0318 13:03:30.974588    1960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.40s)
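
Note: this is a pure flag-validation test; --no-kubernetes and --kubernetes-version are mutually exclusive, and the CLI exits 14 (MK_USAGE) before touching the driver. A sketch, placeholders as before:

    # Rejected immediately with exit status 14
    minikube start -p <profile> --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
    # If a version is pinned in the global config, clear it as the error suggests
    minikube config unset kubernetes-version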

TestStoppedBinaryUpgrade/Setup (0.69s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.69s)

TestStoppedBinaryUpgrade/Upgrade (890.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.284399452.exe start -p stopped-upgrade-437700 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.284399452.exe start -p stopped-upgrade-437700 --memory=2200 --vm-driver=hyperv: (7m38.3721573s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.284399452.exe -p stopped-upgrade-437700 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.284399452.exe -p stopped-upgrade-437700 stop: (38.3038588s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-437700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-437700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m33.8560495s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (890.53s)
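
Note: same upgrade pattern as TestRunningBinaryUpgrade above, except the old cluster is stopped before the new binary takes over; a sketch with an illustrative old-binary path:

    # Old release creates and then stops the cluster
    C:\path\to\minikube-v1.26.0.exe start -p <profile> --memory=2200 --vm-driver=hyperv
    C:\path\to\minikube-v1.26.0.exe -p <profile> stop
    # Binary under test restarts the stopped profile in place
    out/minikube-windows-amd64.exe start -p <profile> --memory=2200 --alsologtostderr -v=1 --driver=hyperv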

TestStoppedBinaryUpgrade/MinikubeLogs (11.5s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-437700
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-437700: (11.5025975s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (11.50s)

Test skip (32/206)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-499500 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-499500 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 2808: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

TestFunctional/parallel/DryRun (5.04s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-499500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-499500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0407363s)

-- stdout --
	* [functional-499500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18431
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0318 11:05:25.740631    1840 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0318 11:05:25.832265    1840 out.go:291] Setting OutFile to fd 676 ...
	I0318 11:05:25.832265    1840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:05:25.832265    1840 out.go:304] Setting ErrFile to fd 888...
	I0318 11:05:25.832265    1840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:05:25.858814    1840 out.go:298] Setting JSON to false
	I0318 11:05:25.862995    1840 start.go:129] hostinfo: {"hostname":"minikube6","uptime":135250,"bootTime":1710624675,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0318 11:05:25.862995    1840 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 11:05:25.867861    1840 out.go:177] * [functional-499500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0318 11:05:25.872815    1840 notify.go:220] Checking for updates...
	I0318 11:05:25.875546    1840 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 11:05:25.878380    1840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 11:05:25.881159    1840 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0318 11:05:25.884317    1840 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 11:05:25.887028    1840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 11:05:25.889855    1840 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:05:25.890854    1840 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)

TestFunctional/parallel/InternationalLanguage (5.03s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-499500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-499500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0321924s)

-- stdout --
	* [functional-499500] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18431
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0318 11:05:30.801728    7264 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0318 11:05:30.876312    7264 out.go:291] Setting OutFile to fd 700 ...
	I0318 11:05:30.876312    7264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:05:30.876312    7264 out.go:304] Setting ErrFile to fd 980...
	I0318 11:05:30.876312    7264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:05:30.900874    7264 out.go:298] Setting JSON to false
	I0318 11:05:30.906135    7264 start.go:129] hostinfo: {"hostname":"minikube6","uptime":135255,"bootTime":1710624675,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0318 11:05:30.906135    7264 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 11:05:30.913921    7264 out.go:177] * [functional-499500] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0318 11:05:30.918158    7264 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0318 11:05:30.918158    7264 notify.go:220] Checking for updates...
	I0318 11:05:30.924716    7264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 11:05:30.927938    7264 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0318 11:05:30.930111    7264 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 11:05:30.933316    7264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 11:05:30.937471    7264 config.go:182] Loaded profile config "functional-499500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:05:30.939032    7264 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV until this issue is solved: https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)
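
The French "sur" in the banner above is deliberate: this test starts minikube under a French locale and looks for localized output. A minimal standalone sketch of that flow, assuming LC_ALL=fr is how the locale is forced and using " sur " as an illustrative assertion (the real check in functional_test.go may differ):

package integration

import (
	"os/exec"
	"strings"
	"testing"
)

func TestFrenchLocaleBanner(t *testing.T) {
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"start", "-p", "functional-499500", "--dry-run",
		"--memory", "250MB", "--alsologtostderr", "--driver=hyperv")
	cmd.Env = append(cmd.Environ(), "LC_ALL=fr") // force the French translations
	out, _ := cmd.CombinedOutput()               // exit status 1 is expected for a 250MB dry run
	if !strings.Contains(string(out), " sur ") { // "sur" is the localized "on" in the banner
		t.Errorf("expected a French banner, got: %s", out)
	}
}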

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)
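
Most of the SKIP entries in this run follow the same guard pattern: check an environment precondition first and bail out with t.Skip, citing a tracking issue, before any cluster work starts. A minimal sketch of that pattern, with a hypothetical driverName helper standing in for however the suite resolves the driver under test:

package integration

import "testing"

// driverName is a hypothetical stand-in for the suite's real driver lookup.
func driverName() string { return "hyperv" }

func validateMountCmd(t *testing.T) {
	// Skip early on drivers where mount is known to be broken.
	if driverName() == "hyperv" {
		t.Skip("skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029")
	}
	// ... the actual mount assertions would run here
}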

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validates the podman env with the podman container runtime; currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv: https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
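
The three DNS skips above share one guard: DNS forwarding requires both a macOS host and the hyperkit driver, so any other combination is skipped up front. A sketch of that combined check (driverName is again a hypothetical stand-in):

package integration

import (
	"runtime"
	"testing"
)

// driverName is a hypothetical stand-in for the suite's real driver lookup.
func driverName() string { return "hyperv" }

func validateDNSForwarding(t *testing.T) {
	// Both conditions must hold: a Darwin host *and* the hyperkit driver.
	if runtime.GOOS != "darwin" || driverName() != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test")
	}
	// ... dig/dscacheutil resolution checks would follow
}

On this Windows/Hyper-V host neither condition holds, which is why all three subtests finish in 0.00s.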

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
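
Opt-in tests like this one are gated on a command-line flag rather than on the host environment. A minimal sketch of that pattern, with a hypothetical flag registration (minikube's suite wires its --gvisor flag through its own harness):

package integration

import (
	"flag"
	"testing"
)

// gvisor mirrors the --gvisor flag named in the skip message above.
var gvisor = flag.Bool("gvisor", false, "run the gvisor addon test")

func TestGvisorAddonSketch(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// ... gvisor runtime-class assertions would follow
}

Running the test binary with "go test -args -gvisor" would flip the flag and opt in.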

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
